Term explanation question: Broadcom

Reference analysis

Analysis: No analysis available yet.

Related questions:

Passage: Acknowledging that so-called cloud computing will blur the distinctions between computers and networks, about two dozen big information technology companies plan to announce a new standards-setting group for computer networking. The group, to be called the Open Networking Foundation, hopes to help standardize a set of technologies pioneered at Stanford and the University of California, Berkeley, and meant to make small and large networks programmable in much the same way that individual computers are.

The changes, if widely adopted, would have implications for global telecommunications networks and large corporate data centers, but also for small household networks. The benefits, proponents say, would be more flexible and secure networks that are less likely to suffer from congestion. Someday, they say, networks might even be less expensive to build and operate. The new approach could allow for setting up on-demand "express lanes" for voice and data traffic that is time-sensitive. Or it might let big telecommunications companies, like Verizon or AT&T, use software to combine several fiber optic backbones temporarily for particularly heavy information loads and then have them automatically separate when a data rush hour is over. For households, the new capabilities might let Internet service providers offer remote services like home security or energy control.

The foundation's organizers also say the new technologies will offer ways to improve computer security and could possibly enhance individual privacy within the e-commerce and social networking markets. Those markets are the fastest-growing uses for computing and network resources. While the new capabilities could be crucial to network engineers, for business users and consumers the changes might be no more noticeable than advances in plumbing, heating and air-conditioning. Everything might work better, but most users would probably not know, or care, why or how.

The members of the Open Networking Foundation will include Broadcom, Brocade, Ciena, Cisco, Citrix, Dell, Deutsche Telekom, Ericsson, Facebook, Force10, Google, Hewlett-Packard, I.B.M., Juniper, Marvell, Microsoft, NEC, Netgear, NTT, Riverbed Technology, Verizon, VMWare and Yahoo. "This answers a question that the entire industry has had, and that is how do you provide owners and operators of large networks with the flexibility of control that they want in a standardized fashion," said Nick McKeown, a professor of electrical engineering and computer science at Stanford, where his and colleagues' work forms part of the technical underpinnings, called OpenFlow.

The effort is a departure from the traditional way the Internet works. As designed by military and academic experts in the 1960s, the Internet has been based on interconnected computers that send and receive packets of data, paying little heed to the content and making few distinctions among the various types of senders and receivers of information. The intelligence in the original Internet was meant to reside largely at the end points of the network (the computers), while the specialized routing computers were relatively dumb post offices of various sizes, mainly confined to reading addresses and transferring packets of data to adjacent systems. But these days, when cloud computing means a lot of the information is stored and processed on computers out on the network, there is a growing need for more intelligent control systems to orchestrate the behavior of thousands of routing machines.

It will make it possible, for example, for managers of large networks to program their network to prioritize certain types of data, perhaps to ensure quality of service or to add security to certain portions of a network. The designers argue that because OpenFlow should open up hardware and software systems that control the flow of Internet data packets, systems that have been closed and proprietary, it will cause a new round of innovation focused principally upon the vast computing systems known as cloud computers.

What is the main purpose of the Open Networking Foundation?
A. To make networks less expensive to build and operate
B. To enhance the capabilities of network engineers
C. To set new standards for computer networking
D. To promote cloud computing

Passage: (same passage as in the previous question)
It can be inferred from the passage that ____
A. The Open Networking Foundation will be led by Stanford and the University of California, Berkeley.
B. With the setting of new standards, operators of large networks will have more flexibility of control.
C. People will have a better understanding of the distinctions between computers and networks thanks to cloud computing.
D. Cloud computing will involve more routing computers than the traditional Internet.
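Since the passage describes OpenFlow-style programmability only in the abstract, a minimal Python sketch of the idea may help: a central controller installs prioritized match/action rules into a switch's flow table, and the switch forwards packets accordingly. The class names, fields, and rules below are simplified illustrations for this sketch, not the actual OpenFlow protocol or any controller's API.

```python
# Minimal sketch of the OpenFlow idea described in the passage: a central
# controller programs match/action rules into a switch's flow table.
# All names and fields here are simplified illustrations, not the real
# OpenFlow wire protocol.

from dataclasses import dataclass

@dataclass
class FlowRule:
    match_dst_port: int      # match on TCP/UDP destination port
    action: str              # e.g. "express_lane", "forward", "drop"
    priority: int            # higher number wins when several rules match

class Switch:
    def __init__(self):
        self.flow_table: list[FlowRule] = []

    def install_rule(self, rule: FlowRule) -> None:
        """Controller pushes a rule; table is kept sorted by priority."""
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def handle_packet(self, dst_port: int) -> str:
        """Return the action of the highest-priority matching rule."""
        for rule in self.flow_table:
            if rule.match_dst_port == dst_port:
                return rule.action
        return "send_to_controller"   # no match: ask the controller what to do

# A controller could program an on-demand "express lane" for time-sensitive
# voice traffic while ordinary web traffic takes the normal path:
switch = Switch()
switch.install_rule(FlowRule(match_dst_port=5060, action="express_lane", priority=100))
switch.install_rule(FlowRule(match_dst_port=80,   action="forward",      priority=10))

print(switch.handle_packet(5060))   # -> express_lane
print(switch.handle_packet(443))    # -> send_to_controller
```

The point of the sketch is the division of labor the passage describes: the forwarding device stays simple, while the policy (which traffic gets priority, which portions get extra security) lives in software that can be reprogrammed on demand.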

During the modem boot process, how does the modem acquire the downstream channel? ()
A. The modem is commanded by the CMTS to tune to the specific channel
B. The modem uses the default value in the Broadcom chipset
C. The modem tuner sets a level as defined by the DOCSIS specification
D. The modem tuner scans the downstream spectrum until a digital QAM-modulated signal is encountered
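To make the scanning behavior described in option D concrete, here is a minimal, hypothetical Python sketch of a DOCSIS-style downstream scan loop. The `Tuner` class, its lock-detection method, and the frequency plan are illustrative assumptions for this sketch only; real modem firmware and channel plans are vendor- and region-specific.

```python
# Hypothetical sketch of a DOCSIS-style downstream channel scan:
# step through the downstream frequency plan until the demodulator
# reports lock on a QAM carrier. Not real modem firmware.

import random

class Tuner:
    """Toy tuner: pretends exactly one frequency carries a QAM signal."""

    def __init__(self, qam_carrier_hz: int):
        self._qam_carrier_hz = qam_carrier_hz
        self._current_hz = 0

    def tune(self, freq_hz: int) -> None:
        self._current_hz = freq_hz

    def qam_lock_detected(self) -> bool:
        # Real hardware would check sync/FEC lock on the demodulator.
        return self._current_hz == self._qam_carrier_hz


def scan_downstream(tuner: Tuner, start_hz=111_000_000, stop_hz=867_000_000,
                    step_hz=6_000_000):
    """Step through the downstream plan until a QAM-locked channel is found."""
    freq = start_hz
    while freq <= stop_hz:
        tuner.tune(freq)
        if tuner.qam_lock_detected():
            return freq          # modem stays on this channel and continues booting
        freq += step_hz
    return None                  # no downstream carrier found; firmware would retry


if __name__ == "__main__":
    carrier = 111_000_000 + 6_000_000 * random.randint(0, 125)
    found = scan_downstream(Tuner(carrier))
    print(f"locked on {found} Hz" if found else "no QAM carrier found")
```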

If the AP's air-interface transmission rate can reach (), then the wireless NIC chipset vendor it requires is ().
A. 108M
B. Atheros
C. 100M
D. Broadcom

A customer is planning to implement a database server for their high-volume online business. Downtime at any point represents a significant loss of revenue, so performance and high availability are extremely important. They requested a 4-way configuration with 8GB RAM and four Gigabit Ethernet connections. Which of the following solutions is the most appropriate? ()
A. x445 4-way Xeon DP machine using 16 x 512MB DIMMs, ServeRAID 6M adapter and Ultra320 15K hard disk drives in a RAID-1 set, with an additional two Broadcom Gigabit NICs
B. x366 4-way Xeon MP machine using 16 x 512MB DIMMs, ServeRAID 6M adapter and Ultra320 15K hard disk drives in a RAID-1 set, with an additional two Intel Gigabit NICs
C. x445 4-way Xeon MP machine using 16 x 512MB DIMMs, ServeRAID 6M adapter and Ultra320 15K hard disk drives in a RAID-1 set, with an additional two Broadcom Gigabit NICs
D. x445 4-way Xeon DP machine using 16 x 512MB DIMMs, ServeRAID 6M adapter and Ultra320 15K hard disk drives in a RAID-1 set, with an additional two Intel Gigabit NICs
