Thursday, May 23, 2013

Will IPv6 ever have a killer app?

The volume of IPv6 traffic, though still small, has grown steadily over the last year. Although most federal agencies missed the Sept. 30, 2012, deadline for enabling the new protocols on public-facing Web sites, they are slowly adopting IPv6. Hurricane Electric, which bills itself as the world’s largest native IPv6 backbone, has announced that it has connected more than 2,000 IPv6 networks.
But the world still is waiting for a reason to make the move. To date, the main reason for transitioning to the new Internet Protocols is that you have to. The Office of Management and Budget told agencies in 2010 that they had to enable IPv6, and the pool of available IPv4 addresses is drying up. Anyone who wants large blocks of new addresses now must get them in IPv6.
So far, however, the new protocols are being used pretty much like the old ones. When will we see a killer app that will make people want to use IPv6, and what will it be?
There has been a lot of talk in the last decade about the improved security that can be achieved with IPv6, the new Internet of Things it will enable and the benefits of true end-to-end connectivity once everyone gets rid of Network Address Translation (NAT).  A global organization such as the Defense Department stands to benefit from access to a nearly endless supply of IP endpoints that could be used to monitor, track and control millions of things anywhere in the world.
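For a sense of the scale involved, a quick back-of-the-envelope comparison of the two address spaces (a minimal Python sketch, not tied to any particular agency or deployment) shows why that supply of endpoints is described as nearly endless:

```python
# Rough comparison of the IPv4 and IPv6 address spaces
ipv4_addresses = 2 ** 32     # about 4.3 billion addresses
ipv6_addresses = 2 ** 128    # about 3.4 x 10^38 addresses

print(f"IPv4 address space: {ipv4_addresses:,}")
print(f"IPv6 address space: {ipv6_addresses:.2e}")
print(f"IPv6 is 2^96, or about {ipv6_addresses // ipv4_addresses:.2e}, times larger")
```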

But despite changes such as the rapid growth of mobile devices, we still are using IP devices pretty much the same way we have for years. Screens are smaller, keyboards are virtual and there is some location-specific functionality, but a mobile device essentially is a little IPv4 PC.
Owen DeLong, IPv6 evangelist for Hurricane Electric, obviously is a fan of the new protocols. He thinks doing away with NAT will be a good thing. So what does he think the killer app for IPv6 will be? “None,” he says. People don’t feel they are missing anything with IPv4 now, and the benefits of a new set of Internet Protocols are too complex for today’s short attention spans. “It’s not something you can explain to the average user in a 10-word sound bite,” he said.
But the interesting thing about killer apps is that, like the Spanish Inquisition, no one expects them. They are unplanned and become part of our lives before we know it. Is the next one already out there?
Are there any innovative uses of IPv6 by your agency or office? Has anyone found a use for the protocols that enables some functionality that was not practical before? Do you have a problem that you think IPv6 can solve? Drop me a line at wjackson@gcn.com and tell me if the new protocols are being used, how they are being used, or how you would like them to be used. Maybe we can identify a driver for the move to IPv6.

How hackers can turn the Internet of Things into a weapon

We are living in a world of increasingly smart devices. Not really intelligent; just smart enough to be dangerous.
As more devices become IP-enabled, they contribute to the pool of things that can be recruited into botnets or other platforms used for distributed attacks. Distributing an attack makes it more difficult to trace its source and easier to overwhelm a target. In the past year, distributed denial of service has become the attack of choice for activists and blackmailers.
Prolexic, a DDOS security company, has published a white paper on Distributed Reflection Denial of Service (DrDOS) attacks that focuses on a handful of protocols, including the Simple Network Management Protocol. SNMP is an application layer (Layer 7) protocol commonly used to manage devices with IP addresses.
“Unlike other DDOS and DrDOS attacks, SNMP attacks allow malicious actors to hijack unsecured network devices — such as routers, printers, cameras, sensors and other devices —  and use them as bots to attack third parties,” the report points out.
This is a concern not only because it increases the number of devices that can be compromised, but also because remote devices such as printers and sensors of every kind often are less likely to be properly managed and secured, leaving them open to exploitation.
For public-sector agencies, this can include such devices as sensors used in weather observations, control valves at power plants, door locks in prisons, traffic signals and any number of other connected devices. A search engine such as Shodan can reveal those connected devices, many of which are completely without security.
SNMP uses the User Datagram Protocol, a stateless protocol that is subject to IP spoofing. A reflection DOS attack using SNMP is a type of amplification attack, because an SNMP request generates a response that typically is at least three times larger. Boiled down to its basics, an attacker port-scans a range of IP addresses to identify exploitable SNMP hosts, then sends SNMP requests to those hosts using the spoofed IP address of the target server. The hosts’ replies saturate the target’s bandwidth, making it unavailable.
“The raw response size of the traffic is amplified significantly,” the report says. “This makes the SNMP reflection attack vector a powerful force.”
The best way to protect yourself from being shanghaied into such an attack is to identify all of the devices accessible on your network, whether or not they appear to be sensitive, and properly manage them. Prolexic offers a list of mitigations in its paper.
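As a first pass at that inventory, a short script can sweep a list of device addresses for SNMP agents that still answer the default “public” community string, which is exactly the kind of unsecured host these attacks recruit. The sketch below is illustrative only (it assumes the pysnmp library and uses a hypothetical host list); it is not Prolexic’s tooling:

```python
# Minimal sketch: flag devices on your own network that answer SNMP
# queries sent with the default "public" community string (requires pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Hypothetical list of device addresses you manage
hosts = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

for host in hosts:
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData("public", mpModel=0),   # SNMPv1, default community
               UdpTransportTarget((host, 161), timeout=2, retries=0),
               ContextData(),
               ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)))
    )
    if error_indication or error_status:
        print(f"{host}: no open SNMP agent found")
    else:
        # The device answered with its system description; flag it for lockdown
        print(f"{host}: responds to 'public' -> {var_binds[0]}")
```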
Remote management of and access to otherwise dumb devices can be a great convenience, but the trade-off is that it adds to the list of things that must be managed and secured.

Machine learning a growing force against online fraud

A group of ex-Google employees has started a company that wants to expand the use of big data to spot fraud before it occurs. Fraud is a blight that costs taxpayers over $125 billion a year and affects public-sector agencies involved in payments, collections and benefits.


San Francisco-based Sift Science says it has developed an algorithm that uses machine-learning techniques to stay ahead of new fraud tactics as they are introduced into its customers’ networks.
“Many anti-fraud technologies follow a set number, maybe 175 to 225 rules, against which to measure user behavior,” Sift Science co-founder Brandon Ballinger told GigaOm.
“The problem is fraudsters don’t follow the rules and change all the time.”
In contrast, machine learning gives a system the ability to recognize new patterns and react to them without explicit rules or cues.
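In code terms, the contrast looks roughly like this: instead of checking hand-written rules, a model is fit to labeled examples and emits a fraud probability for each new event. The snippet below is a generic scikit-learn sketch with made-up features and data, not Sift Science’s actual algorithm:

```python
# Generic sketch of rule-free fraud scoring with scikit-learn.
# Features and training data are hypothetical; this is not Sift Science's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [all-caps listing?, account age in days, orders in the last hour]
X_train = np.array([
    [1,   2, 5],   # labeled fraud
    [1,   1, 8],   # labeled fraud
    [0, 400, 1],   # labeled legitimate
    [0, 900, 2],   # labeled legitimate
])
y_train = np.array([1, 1, 0, 0])   # 1 = fraud, 0 = legitimate

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new event: a fraud probability, with no hand-written rules involved
new_event = np.array([[1, 3, 6]])
print(f"fraud score: {model.predict_proba(new_event)[0, 1]:.2f}")
```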
The Sift system trains itself by learning patterns of fraud as they appear on its customers’ sites, which become part of the machine-learning network. The company says its systems already incorporate over 1 million fraud patterns that help alert users to deceptive activity.
“As more sites join, it will learn more patterns to help everybody fight fraud more accurately,” Ballinger added.
Sift Science uses APIs that allow customers to report events on their sites, then applies large-scale machine learning to assign a fraud score to each of the site’s users, according to the company’s description. The collection process has three principal components: on-page activity gathered by a JavaScript snippet, transactions reported from a customer’s server using Sift Science’s REST API, and labels of known fraudsters.
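As a rough illustration of the server-side reporting step, the sketch below posts a transaction event to a fraud-scoring REST endpoint as JSON. The URL, field names and key are placeholders, not necessarily Sift Science’s exact interface:

```python
# Hypothetical sketch of reporting a transaction event to a fraud-scoring
# REST API. The endpoint, field names and key are placeholders, not the
# documented Sift Science interface.
import json
import urllib.request

API_URL = "https://api.example-fraud-service.com/v1/events"   # placeholder
API_KEY = "YOUR_API_KEY"                                       # placeholder

event = {
    "api_key": API_KEY,
    "type": "transaction",       # what happened
    "user_id": "user_12345",     # who did it
    "amount_micros": 50600000,   # $50.60 expressed in micros
    "currency_code": "USD",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    # A typical response would carry back a fraud score for the user
    print(response.read().decode("utf-8"))
```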

Running analytics on the collected patterns has revealed some unusual signs of probable fraud, according to the company. For instance, an auction item listed in all-capital text by a seller is four times more likely to be fraudulent.
And people with Yahoo.com e-mail accounts are five times more likely to create a fake account than someone using Gmail.com, according to Sift Science records. The company recently opened the fraud detection service to the public for testing.
Machine learning as a tool to fight fraud has been developing for a while. In 2006, researchers at Stanford University broke down several mathematical methods, concluding that, “machine learning methods are quite easily able to outperform current industry standards in detecting fraud.”
RSA, the security unit of EMC, has been using analytics to combat online fraud for years, and attributes an increase in detection rates over the past few years to greater use of machine learning, according to a report in the Guardian.
Machine learning systems also are becoming more common in government, including the Federal Aviation Administration’s Next Generation Air Transportation System and the Energy Department’s Smart Grid program to create an interactive national power delivery system.

Public-sector agencies and programs that are vulnerable to fraud, waste and abuse are becoming increasingly dependent on analytics to identify it.
Recently the state of Michigan deployed an SAS Analytics suite as the engine for its Enterprise Fraud Detection System to go after fraud in its unemployment insurance programs.
And the inspector general of Illinois’ Department of Healthcare and Family Services is using analytics to tackle insurance claims fraud.

The First Egyptian Tablet (INAR)

Meet our Egyptian tablet, named “INAR”

The Egyptian minister of telecommunications announced on 11/4/2013 in Cairo the production of the first so-called Egyptian tablet. The tablet’s name is “INAR,” and I think it is about 60 percent Egyptian. Its operating system is Android 4, and it will be distributed to secondary-school and university students.
It makes me happy as an Egyptian to see Katron (an Egyptian electronics company) competing with the electronics giants. But seriously, can someone please tell Katron to improve their official website? It is a disgrace. By the way, Katron is one of the state-owned companies from the 1960s, and you know what the 1960s were like. Seriously speaking, I am glad that we have started to produce tablets.
Katron is going to produce LE 2 billion worth of “INAR” units over the next four years, according to Ahram Online (an Egyptian magazine). The tablet is actually going to be assembled in Egypt, which is not a bad thing but rather a good step. The first 1,000 units will be produced before the end of the year.
The specifications of the tablet, according to Al Masry Al Youm (another Egyptian magazine):
  • Display: 9.7 inches, 720 x 1024 pixels
  • OS: Android 4.0
  • Primary video camera: 2 MP
  • Secondary video camera: 2 MP
  • Weight: 750 g
  • Memory: 8 to 32 GB storage, 1 GB RAM
  • Connectivity: Wi-Fi and 3G


Google puts decades of Earth's changes into time-lapse animation

Google, working with the U.S. Geological Survey, is corralling four decades of satellite images and using Google Earth Engine to create a time-lapse history of the Earth’s surface.


The interactive, video-like presentation of decades of Landsat images will help scientists track changes in the Earth’s surface, fueling research in detecting deforestation, estimating biomass and mapping the world’s roadless areas, Google said in an announcement on its blog.
Las Vegas' urban growth, as seen through Google Earth Engine time-lapse animation.
"This news is the latest example of how the Department of the Interior's policy of unrestricted access and free distribution of Landsat satellite imagery to the public fosters innovation and mutual awareness of environmental conditions around the globe," said Anne Castle, assistant secretary of the Interior for Water and Science. "The 40-year archive of Landsat images of every spot on Earth is a treasure trove of scientific information that can form the basis for myriad useful applications by commercial enterprises, government scientists and managers, the academic community, and the public at large."
The Google Earth Engine data sets, which span more than 25 years, have been available to researchers for a few years now, according to GigaOm. The Earth Engine application was developed in 2010 after the company entered into a 2009 partnership with the USGS to place its archive of Earth imagery online. Google collected, cataloged and stored over 900 terabytes of USGS and NASA Landsat images from as far back as the 1970s and put them in the Google cloud. Some of the images were archived on tape drives, and some existed only as negatives and prints.

The first Landsat was launched by NASA in 1972 (USGS manages the program) and has been followed by seven others; the most recent, Landsat 8, was launched in February. In the four decades since then, NASA has collected millions of images, which, though useful, could be hard to search and put into context. In 2010, in fact, NASA launched its NASA Earth Exchange (NEX), a supercomputing-powered, social networking-linked virtual lab to get a handle on all the imagery and help speed the study of Earth sciences. The agency opened NEX to outside researchers last year, and even produced some time-lapse presentations of its own.
The Google Earth Engine project is more comprehensive, however. According to Time magazine, the media partner in the Timelapse project, Google had its hands full with the data conversion. “Even getting the already digitized EROS and Landsat images from the USGS to Google took some doing, necessitating the construction of a new digital pipeline that could handle the massive stream of data.”
Google then sifted through the data to find the best cloud-free or digitally scrubbed images for every spot on Earth and for each year since 1984. Those images were compiled into “enormous planetary images, 1.78 terapixels each, one for each year,” Google said. Google then worked with the CREATE Lab at Carnegie Mellon University to convert the yearly Earth images from a crude flip-book-like presentation into a seamless, browsable HTML5 animation that is featured on Google’s Timelapse website.
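The compositing step can be approximated with the Earth Engine Python API that Google makes available to researchers. The sketch below builds a single cloud-screened Landsat 5 composite for one year over one small region; the image collection ID and region are assumptions for illustration, and this is nowhere near the planet-scale pipeline described above:

```python
# Rough sketch of a one-year, cloud-screened Landsat composite using the
# Earth Engine Python API (the earthengine-api package). The collection ID
# and region are illustrative assumptions, not Google's Timelapse pipeline.
import ee

ee.Initialize()  # requires an authorized Earth Engine account

# Landsat 5 scenes over a small region around Las Vegas for 1990
region = ee.Geometry.Rectangle([-115.4, 35.9, -114.9, 36.4])
scenes = (ee.ImageCollection("LANDSAT/LT05/C02/T1")   # assumed collection ID
          .filterBounds(region)
          .filterDate("1990-01-01", "1990-12-31"))

# Built-in helper that favors the least cloudy pixels across the year
composite = ee.Algorithms.Landsat.simpleComposite(scenes)

# Hand the composite off for export to Google Drive as a GeoTIFF
task = ee.batch.Export.image.toDrive(
    image=composite.clip(region),
    description="las_vegas_1990_composite",
    scale=30,        # Landsat resolution in meters
    region=region,
)
task.start()
```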
Climate researchers aren’t the only ones interested in the Google Earth Engine data. According to Time, the government of Mexico enlisted Google to help determine the damage to the country’s ground cover. Google and Landsat created a visual survey “made up of 53,000 images, representing 18 terabytes of data that required 15,000 hours of computer time to complete.” But the job was completed in one long workday, thanks to the massive Google research cloud.
Every day Google downloads terabytes of satellite images from the USGS and maintains the files on spinning disks in data centers, GigaOm reported.



12 things you should know about mobile learning

Mobile learning — designing training and educational courses for use on mobile devices — is a burgeoning topic for public-sector agencies. Military, civilian and educational IT shops are increasingly developing educational applications that users can tap into via their smart phones and tablets.


But there’s a lot more to it than simply porting existing materials to a smaller format. Agencies have found that mobile learning apps must be designed separately and specifically for mobile use and used to complement, rather than replace, other coursework.
All of which means it’s going to be a hot topic for some time to come, as the workforce becomes more mobile. Here are a dozen terms that will come in handy in making plans for mobile learning apps:
44 by 44 pixels. The target tap size for mobile apps, according to Apple.  44 pixels is about 7 mm, or just over a quarter of an inch.
Backchannel. The real-time online conversation that occurs alongside the primary group activity, such as tweeting a keynote at a conference or IM chats among meeting participants to share ideas.
Blended learning. A learning style that combines self-paced e-learning and face-to-face or classroom instruction.
Chunking. The practice of separating training materials into brief, mobile-friendly sections to improve learner comprehension and retention.
eLearning. A broad term for self-paced or instructor-led electronically supported learning and teaching, including educational technology. It covers text, image, animation, streaming video or audio or interactive content that can be delivered by the Web, app, audio or video, TV or portable media.
Learncasting. Instructional content that is distributed via a podcast, video or syndication feed such as RSS and Atom.
Learning 2.0. Technology-aided learning that emphasizes social learning through the use of social software and tools such as blogs, wikis, Twitter and virtual worlds. 
MOODLE.  An open-source course management system, the Modular Object-Oriented Dynamic Learning Environment comes with a news forum, e-mail, discussion forums, calendaring, tracking and reporting to accommodate full online courses or in-person instruction.
Microtasking. Small discrete tasks often performed on mobile devices that can range from browsing Twitter feeds while waiting in the grocery line to studying vocabulary on the bus. Microtasking is usually considered a solitary practice, but there are some social, crowdsourced applications in which many volunteers contribute to a common goal, such as correcting errors in image-to-digital text conversion.

mLearning. Different from eLearning, mobile learning leverages mobile technologies, tools and techniques to encourage learning at any time in any setting.
Reference apps. Language translators, dictionaries, visual and computational search engines and calculators are reference tools adapted for mobile platforms. Reference apps can take advantage of a smart phone’s technology, such as GPS, compass and accelerometer to enhance and personalize the learning experience.
SCORM. The Sharable Content Object Reference Model is the de facto set of technical standards for e-learning content and learning software products.

Samsung adds feature to separate work, personal data on smart phones

Samsung has announced a new feature for its Samsung for Enterprise solution that could appeal to public-sector agencies. Called Knox, it combines platform and application security as well as mobile device management (MDM) for Samsung smart phones.
The main feature that could appeal to users and network administrators alike is the ability to keep work and personal documents and applications in two separate environments. This is fast becoming a key feature of phones intended for use in Bring Your Own Device (BYOD) environments. BlackBerry takes that approach with the Balance feature in BlackBerry 10, and other companies, such as VMware and Red Bend, are making software that does the same thing, according to Bloomberg News.
Samsung displayed its solution at the recent Mobile World Congress, and according to attendees the devices performed well. The Verge reported that “Switching between environments on the Galaxy SIII test units … is basically instantaneous – there’s no lag, no delay, no boot time.”

However, it is too early to tell which Samsung devices will work with Knox, as it requires specific on-chip memory to work properly. As Steve Patterson of Network World puts it, “Knox does not really represent a Bring Your Own Device replacement because it is hardware-dependent, so it won’t run on every Android device. So as long as there are ordinary Android and iOS devices in an enterprise, MDM will be needed for BYOD.”
So to fully implement the network security features, it would be necessary to run Knox under the auspices of Samsung for Enterprise. Nevertheless, having a split-environment, secure Android platform is still likely to attract the interest of agencies.