
Dawning of the digital security border: A new standard for wide area perimeter and border security

The digital age is upon us, with more technologies shifting from the physical to the virtual to simplify operations and improve effectiveness. The use of digital communications technologies with interconnected low-power sensors has been of particular interest to security operations, where human monitoring and patrols present significant challenges.

Picture the U.S. borders, both land and maritime, which feature large stretches of remote, unpopulated terrain that require constant monitoring and security. The U.S.-Mexico border, for example, is approximately 2,000 miles long, while the U.S.-Canada border runs nearly 4,000 miles. The coastlines of the U.S., including Alaska and Hawaii, represent another 12,500 miles. All in all, the U.S. has over 18,000 miles of border to monitor. Monitoring and patrolling these borders with human personnel alone is a significant and costly undertaking that leaves many vulnerabilities. With low-power sensor technology connected to long-range wide area wireless communications, however, effective monitoring becomes attainable.

Technology-based perimeter security in the U.S. has had its fair share of failures over the last 15 to 20 years. In January 2011, the Department of Homeland Security canceled SBI-Net, a technology-based "virtual fence," after spending $1 billion. The program foundered on questions of technical viability and cost, two issues that companies and agencies looking to establish a digital perimeter still face today. Even as larger projects are proposed, such as the current administration's push to establish a physical border wall, budget concerns about making them a reality have overshadowed the question of whether more efficient digital technologies could handle perimeter monitoring.

However, a newly published industrial wireless standard known as IEEE 802.16s offers great promise for meeting the coverage and security requirements of border security at a reasonable cost. The standard was designed from the ground up to meet the wide area coverage needs of industrial and security networks.

The standard details how to use software-defined radio (SDR) technology to transmit broadband data over very long ranges with high upstream capacity. Under the standard's design, each base station tower can provide roughly 3,000 square miles of data coverage, allowing wireless sensor and monitoring technologies to be deployed across a wide territory with minimal infrastructure. That coverage includes not only the perimeter line itself but everything within about 30 miles of the tower in any direction, creating a wide protection zone rather than a single barrier.
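As a rough sanity check on those figures, the short Python sketch below estimates the per-tower footprint and the number of towers a linear border would need. It is a back-of-the-envelope illustration only: the 30-mile radius and border lengths come from the figures above, while the simple spacing assumption (towers placed one coverage diameter apart along the line) is mine, not part of the 802.16s standard.

import math

# Assumed per-tower coverage radius, taken from the ~30-mile figure above.
COVERAGE_RADIUS_MILES = 30

def coverage_area_sq_miles(radius: float = COVERAGE_RADIUS_MILES) -> float:
    """Area covered by a single tower, assuming a circular footprint."""
    return math.pi * radius ** 2

def towers_for_border(border_miles: float,
                      radius: float = COVERAGE_RADIUS_MILES) -> int:
    """Towers needed to blanket a linear border, assuming towers are spaced
    one coverage diameter (2 * radius) apart along the line."""
    return math.ceil(border_miles / (2 * radius))

if __name__ == "__main__":
    print(f"Per-tower coverage: ~{coverage_area_sq_miles():,.0f} sq mi")
    print(f"US-Mexico border (~2,000 mi): ~{towers_for_border(2000)} towers")
    print(f"All US borders (~18,000 mi): ~{towers_for_border(18000)} towers")

Under those assumptions, a 30-mile radius works out to roughly 2,800 square miles per tower, consistent with the 3,000-square-mile figure, and the full 18,000 miles of border would need on the order of 300 towers.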

Remote SDRs are similar to cellular modems, but with far greater range. Once coverage is in place, they can be connected to low-cost sensor networks that provide intelligence and real-time feedback. This is particularly useful for connecting the IoT technologies used for perimeter security, such as thermal imaging, temperature or chemical monitoring, event-based video streaming and night vision.
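To illustrate what event-based feedback over such a link might look like, here is a minimal Python sketch of a sensor node that polls locally and transmits only when a reading crosses a threshold, conserving the shared upstream capacity. The sensor and uplink functions (read_thermal_sensor, send_over_sdr_link) are hypothetical placeholders, not APIs from any particular product or from the 802.16s standard.

import json
import random
import time

def read_thermal_sensor() -> float:
    """Hypothetical stand-in for a thermal or motion sensor reading (0.0-1.0)."""
    return random.uniform(0.0, 1.0)

def send_over_sdr_link(payload: bytes) -> None:
    """Hypothetical stand-in for the uplink API of the remote SDR modem."""
    print(f"uplink {len(payload)} bytes: {payload.decode()}")

ALERT_THRESHOLD = 0.95  # illustrative detection threshold

def monitor_loop(sensor_id: str, poll_seconds: float = 1.0) -> None:
    """Poll the sensor locally; transmit only on events to save bandwidth."""
    while True:
        reading = read_thermal_sensor()
        if reading >= ALERT_THRESHOLD:
            alert = {"sensor": sensor_id, "reading": round(reading, 3),
                     "ts": int(time.time())}
            send_over_sdr_link(json.dumps(alert).encode())
        time.sleep(poll_seconds)

The design point is simply that intelligence stays at the edge and only small alert payloads cross the wide area link.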

An important part of the standard is frequency selection. For homeland security efforts such as the deployment of a digital border, the government already has access to VHF ranges that can be used for maximum coverage and minimal infrastructure.
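The coverage advantage of VHF follows from basic propagation physics: free-space path loss grows with frequency, so a VHF signal arrives tens of decibels stronger than a microwave signal over the same distance. The sketch below applies the standard free-space path loss formula; the 150 MHz and 2.4 GHz values are illustrative comparison points, not a claim about which channels the government would actually assign.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

distance_km = 30 * 1.609  # roughly the 30-mile coverage radius cited above

# Illustrative comparison: a VHF channel vs. a 2.4 GHz Wi-Fi-class channel.
for freq in (150.0, 2400.0):
    print(f"{freq:>7.0f} MHz over {distance_km:.0f} km: {fspl_db(distance_km, freq):.1f} dB")

Over a 30-mile path, the VHF link comes out roughly 24 dB ahead of the 2.4 GHz link, which is a large part of why a single VHF tower can cover territory that would otherwise take many short-range access points.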

The use of licensed spectrum also addresses the security concern with IoT technologies used for perimeter security. SBI-Net was based on low-power, consumer-based Wi-Fi technology capable of only short-range communications and vulnerable to interference. SDR technologies over licensed spectrum scale for wide areas and can create a closed-loop system, providing a digital and physical separation of wireless sensors and monitors from the public internet.

Digital perimeters will continue to be used due to their ease and efficiency once they’re established. The challenge, however, is establishing them in the first place. The new 802.16s industrial standard seems to be the right place to start.


Join the conversation

Will open source virtualization ever become mainstream?
Yes, because "mainstream" needs it! #simple
Not in the short term.
maybe after at least 5 years
If mainstream means a significant percentage or a majority of workloads running on it, then YES, but many may not even be aware (or care) that they are using it … running appliances and cloud providers that incorporate OSS
not across all sectors but maybe in sectors like higher education – colleges / universities.
In my company, there is no reason to transfer the licensing budget to FTEs to maintain an open source solution. Remember Linux vs. Windows? Is Linux the main OS today for the server or the desktop?
VMware is the market leader, MS is not ready yet but 2nd, I don't see any of the Open Source offerings becoming mainstream...
It is a hybrid world, there will always be some open source but I believe commercial offerings are going to dominate it.
KVM/Xen against VMware or Microsoft? Really?
Heterogeneous virtualization = Higher OPEX costs.
What about OSS virt solutions used via Linux distributions? Are you counting them too?
Linux use in the data center is growing rapidly, which will help drive adoption of open source virtualization as its feature set improves tremendously.
As Linux is the main OS now for servers, open source will likely take the lead in virtualization unless VMware drops its enterprise prices to about a third of what they are today. That said, it is going to take years at a minimum, and if VMware became aggressive on pricing it might not happen.
It depends on how you define the "market". If you count sheer number of installs, then Linux/KVM may be way ahead. But if you count revenue, then VMware is probably ahead; their software costs a fortune.

Also keep in mind that nobody counts software that is not registered, such as CentOS/KVM. This is used widely throughout the world. So open source virtualization may already be way ahead. I use it all the time, it is very solid and capable, and fills all of my virtualization needs for $0 license fees.
