Monday, 29 July 2013

DOM & SAX & XML

Document Object Model

The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML and XML documents. Objects in the DOM tree may be addressed and manipulated by using methods on the objects. The public interface of a DOM is specified in its application programming interface (API). The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the layout engines of web browsers.

History:
Legacy DOM

JavaScript was released by Netscape Communications in 1996 within Netscape Navigator 2.0. Netscape's competitor, Microsoft, released Internet Explorer 3.0 later the same year with a port of JavaScript called JScript. JavaScript and JScript let web developers create web pages with client-side interactivity. The limited facilities for detecting user-generated events and modifying the HTML document in the first generation of these languages eventually became known as "DOM Level 0" or "Legacy DOM." No independent standard was developed for DOM Level 0, but it was partly described in the specification of HTML 4.
Legacy DOM was limited in the kinds of elements that could be accessed. Form, link and image elements could be referenced with a hierarchical name that began with the root document object. A hierarchical name could make use of either the names or the sequential index of the traversed elements. For example, a form input element could be accessed as either "document.formName.inputName" or "document.forms[0].elements[0]."
The Legacy DOM enabled client-side form validation and the popular "rollover" effect.

Intermediate DOM

In 1997, Netscape and Microsoft released version 4.0 of Netscape Navigator and Internet Explorer respectively, adding support for Dynamic HTML (DHTML), functionality enabling changes to a loaded HTML document. DHTML required extensions to the rudimentary document object that was available in the Legacy DOM implementations. Although the Legacy DOM implementations were largely compatible since JScript was based on JavaScript, the DHTML DOM extensions were developed in parallel by each browser maker and remained incompatible. These versions of the DOM became known as the "Intermediate DOM."



Standardization

The World Wide Web Consortium (W3C), founded in 1994 to promote open standards for the World Wide Web, brought Netscape Communications and Microsoft together with other companies to develop a standard for browser scripting languages, called "ECMAScript." The first version of the standard was published in 1997. Subsequent releases of JavaScript and JScript would implement the ECMAScript standard for greater cross-browser compatibility.
After the release of ECMAScript, W3C began work on a standardized DOM. The initial DOM standard, known as "DOM Level 1," was recommended by W3C in late 1998. About the same time, Internet Explorer 5.0 shipped with limited support for DOM Level 1. DOM Level 1 provided a complete model for an entire HTML or XML document, including means to change any portion of the document. Non-conformant browsers such as Internet Explorer 4.x and Netscape 4.x were still widely used as late as 2000. DOM Level 2 was published in late 2000. It introduced the "getElementById" function as well as an event model and support for XML namespaces and CSS. DOM Level 3, the current release of the DOM specification, published in April 2004, added support for XPath and keyboard event handling, as well as an interface for serializing documents as XML.
DOM Level 4 is currently being developed. Draft version 6 was released in December 2012.
By 2005, large parts of W3C DOM were well-supported by common ECMAScript-enabled browsers, including Microsoft Internet Explorer version 6 (from 2001), Opera, Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino).
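Much of the W3C DOM API also appears outside browsers. As an illustration, Python's stdlib xml.dom.minidom implements a subset of DOM Level 1/2 (note that its getElementById only works when ID-typed attributes are declared, e.g. in a DTD, so the lookup below filters on the attribute by hand):

```python
from xml.dom.minidom import parseString

doc = parseString('<doc><item id="a">first</item><item id="b">second</item></doc>')

# DOM Level 1 tree access:
items = doc.getElementsByTagName("item")
# Stand-in for getElementById (minidom would return None for it here,
# because "id" is not declared as an ID-typed attribute):
second = [e for e in items if e.getAttribute("id") == "b"][0]
print(second.firstChild.data)   # -> second
```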

Applications
Web browsers:

To render a document such as an HTML page, most web browsers use an internal model similar to the DOM. The nodes of every document are organized in a tree structure, called the DOM tree, with the topmost node named the "Document object". When an HTML page is rendered, the browser downloads the HTML into local memory and automatically parses it to display the page on screen. The DOM is also the interface through which JavaScript reads and modifies the state of an HTML page in the browser.

Implementations

Because DOM supports navigation in any direction (e.g., parent and previous sibling) and allows for arbitrary modifications, an implementation must at least buffer the document that has been read so far (or some parsed form of it).
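This buffering requirement is visible in any in-memory DOM implementation. For example, with Python's stdlib xml.dom.minidom, the whole parsed tree must exist before the navigation below can work:

```python
from xml.dom.minidom import parseString

doc = parseString("<root><a/><b/><c/></root>")
b = doc.documentElement.childNodes[1]

# Navigation in any direction -- only possible because the entire
# parsed tree is held in memory:
print(b.previousSibling.tagName)   # -> a
print(b.nextSibling.tagName)       # -> c
print(b.parentNode.tagName)        # -> root
```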

Layout engines

Web browsers rely on layout engines to parse HTML into a DOM. Some layout engines such as Trident/MSHTML and Presto are associated primarily or exclusively with a particular browser such as Internet Explorer and Opera respectively. Others, such as WebKit and Gecko, are shared by a number of browsers, such as Google Chrome, Firefox and Safari. The different layout engines implement the DOM standards to varying degrees of compliance.


Simple API for XML

SAX (Simple API for XML) is an event-based sequential access parser API developed by the XML-DEV mailing list for XML documents. SAX provides a mechanism for reading data from an XML document that is an alternative to that provided by the Document Object Model (DOM). Where the DOM operates on the document as a whole, SAX parsers operate on each piece of the XML document sequentially.

Definition

Unlike DOM, there is no formal specification for SAX. The Java implementation of SAX is considered to be normative. SAX processes documents state-dependently, in contrast to DOM which is used for state-independent processing of XML documents.

Benefits

SAX parsers have some benefits over DOM-style parsers. A SAX parser only needs to report each parsing event as it happens, and normally discards almost all of that information once reported (it does, however, keep some things, for example a list of all elements that have not been closed yet, in order to catch later errors such as end-tags in the wrong order). Thus, the minimum memory required for a SAX parser is proportional to the maximum depth of the XML file (i.e., of the XML tree) and the maximum data involved in a single XML event (such as the name and attributes of a single start-tag, or the content of a processing instruction, etc.).
This much memory is usually considered negligible. A DOM parser, in contrast, typically builds a tree representation of the entire document in memory to begin with, thus using memory that increases with the entire document length. This takes considerable time and space for large documents (memory allocation and data-structure construction take time). The compensating advantage, of course, is that once loaded any part of the document can be accessed in any order.
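The depth-proportional memory claim can be made concrete with a handler that retains only the stack of currently open elements, using Python's stdlib xml.sax (the tiny document here is illustrative):

```python
import xml.sax

class DepthTracker(xml.sax.ContentHandler):
    """Keeps only the stack of currently open elements --
    memory proportional to tree depth, not document size."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.max_depth = 0
    def startElement(self, name, attrs):
        self.stack.append(name)
        self.max_depth = max(self.max_depth, len(self.stack))
    def endElement(self, name):
        self.stack.pop()

handler = DepthTracker()
xml.sax.parseString(b"<a><b><c/></b><b/></a>", handler)
print(handler.max_depth)   # -> 3
```

However long the document grows horizontally, the stack never holds more entries than the deepest nesting level.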
Because of the event-driven nature of SAX, processing documents is generally far faster than DOM-style parsers, so long as the processing can be done in a start-to-end pass. Many tasks, such as indexing, conversion to other formats, very simple formatting, and the like, can be done that way. Other tasks, such as sorting, rearranging sections, getting from a link to its target, looking up information on one element to help process a later one, and the like, require accessing the document structure in complex orders and will be much faster with DOM than with multiple SAX passes.
Some implementations do not neatly fit either category: a DOM approach can keep its persistent data on disk, cleverly organized for speed (editors such as SoftQuad Author/Editor and large-document browser/indexers such as DynaText do this); while a SAX approach can cleverly cache information for later use (any validating SAX parser keeps more information than described above). Such implementations blur the DOM/SAX tradeoffs, but are often very effective in practice.
Due to the nature of DOM, streamed reading from disk requires techniques such as lazy evaluation, caches, virtual memory, or persistent data structures. Processing XML documents larger than main memory is sometimes thought impossible because some DOM parsers do not allow it. However, it is no less possible than sorting a dataset larger than main memory: disk space can be used as memory to sidestep the limitation.

Drawbacks

The event-driven model of SAX is useful for XML parsing, but it does have certain drawbacks. Virtually any kind of XML validation requires access to the document in full. The most trivial example is that an attribute declared in the DTD to be of type IDREF requires that there be an element in the document that uses the same value for an ID attribute. To validate this in a SAX parser, one must keep track of every ID attribute (any one of them might end up being referenced by an IDREF attribute at the very end), as well as every IDREF attribute until it is resolved. Similarly, to validate that each element has an acceptable sequence of child elements, information about what child elements have been seen for each parent must be kept until the parent closes.
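As an illustration, the ID/IDREF bookkeeping described above might look like the following sketch with Python's stdlib xml.sax. It cheats for brevity: it treats attributes literally named "id" and "idref" as having those types, which a real validator would instead learn from the DTD.

```python
import xml.sax

class IdRefChecker(xml.sax.ContentHandler):
    """Collects ids and idrefs during the single pass; dangling
    references can only be reported once the document has ended.
    Assumes attributes literally named 'id'/'idref' (a real
    validator would use the DTD-declared attribute types)."""
    def __init__(self):
        super().__init__()
        self.ids = set()
        self.idrefs = set()
    def startElement(self, name, attrs):
        names = attrs.getNames()
        if "id" in names:
            self.ids.add(attrs.getValue("id"))
        if "idref" in names:
            self.idrefs.add(attrs.getValue("idref"))
    def endDocument(self):
        self.dangling = self.idrefs - self.ids

h = IdRefChecker()
xml.sax.parseString(b'<d><e id="x"/><f idref="x"/><g idref="y"/></d>', h)
print(h.dangling)   # -> {'y'}
```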
Additionally, some kinds of XML processing simply require having access to the entire document. XSLT and XPath, for example, need to be able to access any node at any time in the parsed XML tree. Editors and browsers likewise need to be able to display, modify, and perhaps re-validate at any time. While a SAX parser may well be used to construct such a tree initially, SAX provides no help for such processing as a whole.

XML processing with SAX

A parser that implements SAX (i.e., a SAX Parser) functions as a stream parser, with an event-driven API. The user defines a number of callback methods that will be called when events occur during parsing. The SAX events include (among others):

·        XML Text nodes
·        XML Element Starts and Ends
·        XML Processing Instructions
·        XML Comments

Some events correspond to XML objects that are easily returned all at once, such as comments. However, XML elements can contain many other XML objects, and so SAX represents them as does XML itself: by one event at the beginning, and another at the end. Properly speaking, the SAX interface does not deal in elements, but in events that largely correspond to tags. SAX parsing is unidirectional; previously parsed data cannot be re-read without starting the parsing operation again.
There are many SAX-like implementations in existence. In practice, details vary, but the overall model is the same. For example, XML attributes are typically provided as name and value arguments passed to element events, but can also be provided as separate events, or via a hash or similar collection of all the attributes. For another, some implementations provide "Init" and "Fin" callbacks for the very start and end of parsing; others don't. The exact names for given event types also vary slightly between implementations.
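As a concrete sketch of the callback model, using Python's stdlib xml.sax (which mirrors the normative Java API): the parser drives the handler, one method per event type. Comment events are not shown, since they require the optional LexicalHandler extension.

```python
import xml.sax

class EchoHandler(xml.sax.ContentHandler):
    """Records one tuple per SAX event; the parser drives the calls."""
    def __init__(self):
        super().__init__()
        self.events = []
    def startElement(self, name, attrs):       # element start tag
        self.events.append(("start", name, dict(attrs.items())))
    def endElement(self, name):                # element end tag
        self.events.append(("end", name))
    def characters(self, content):             # text node (may arrive in chunks)
        if content.strip():
            self.events.append(("text", content))
    def processingInstruction(self, target, data):
        self.events.append(("pi", target, data))

h = EchoHandler()
xml.sax.parseString(b'<greeting lang="en">hello</greeting>', h)
for ev in h.events:
    print(ev)
```

Note that, as the text above says, the element appears only as a "start" and an "end" event; the handler, not the parser, must reassemble any larger structure it needs.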

XML & ITS APPLICATIONS

Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. It is defined in the XML 1.0 Specification produced by the W3C, and several other related specifications, all of them gratis open standards.
The design goals of XML emphasize simplicity, generality, and usability over the Internet. It is a textual data format with strong support via Unicode for the languages of the world. Although the design of XML focuses on documents, it is widely used for the representation of arbitrary data structures, for example in web services.
Many application programming interfaces (APIs) have been developed to aid software developers with processing XML data, and several schema systems exist to aid in the definition of XML-based languages.
As of 2009, hundreds of document formats using XML syntax have been developed, including RSS, Atom, SOAP, and XHTML. XML-based formats have become the default for many office-productivity tools, including Microsoft Office (Office Open XML), OpenOffice.org and LibreOffice (OpenDocument), and Apple's iWork. XML has also been employed as the base language for communication protocols, such as XMPP.

Well-formedness and error-handling

The XML specification defines an XML document as a well-formed text, meaning that it satisfies a list of syntax rules provided in the specification. Some key points in the fairly lengthy list include:
·        The document contains only properly encoded legal Unicode characters
·        None of the special syntax characters such as < and & appear except when performing their markup-delineation roles
·        The begin, end, and empty-element tags that delimit the elements are correctly nested, with none missing and none overlapping
·        The element tags are case-sensitive; the beginning and end tags must match exactly. Tag names cannot contain any of the characters !"#$%&'()*+,/;<=>?@[\]^`{|}~, nor a space character, and cannot start with -, ., or a numeric digit.
·        A single "root" element contains all the other elements
The definition of an XML document excludes texts that contain violations of well-formedness rules; they are simply not XML. An XML processor that encounters such a violation is required to report such errors and to cease normal processing. This policy, occasionally referred to as draconian, stands in notable contrast to the behavior of programs that process HTML, which are designed to produce a reasonable result even in the presence of severe markup errors.[14] XML's policy in this area has been criticized as a violation of Postel's law ("Be conservative in what you send; be liberal in what you accept").[15]
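The draconian policy is easy to observe with any conforming parser. A sketch with Python's stdlib xml.dom.minidom, whose underlying Expat parser halts at the first well-formedness violation:

```python
from xml.dom.minidom import parseString
from xml.parsers.expat import ExpatError

doc = parseString("<ok><child/></ok>")      # well-formed: parses normally

try:
    parseString("<a><b></a></b>")           # overlapping tags: not XML at all
    error = None
except ExpatError as e:                     # draconian: processing halts
    error = e
print("rejected:", error)
```

An HTML browser given analogously broken markup would instead attempt error recovery and render something anyway.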
The XML specification defines a valid XML document as a well-formed XML document which also conforms to the rules of a Document Type Definition (DTD). By extension, the term can also refer to documents that conform to rules in other schema languages, such as XML Schema (XSD). This term should not be confused with a well-formed XML document, which is defined as an XML document that has correct XML syntax according to W3C standards.

Schemas and validation

In addition to being well-formed, an XML document may be valid. This means that it contains a reference to a Document Type Definition (DTD), and that its elements and attributes are declared in that DTD and follow the grammatical rules for them that the DTD specifies.
XML processors are classified as validating or non-validating depending on whether or not they check XML documents for validity. A processor that discovers a validity error must be able to report it, but may continue normal processing. A DTD is an example of a schema or grammar. Since the initial publication of XML 1.0, there has been substantial work in the area of schema languages for XML. Such schema languages typically constrain the set of elements that may be used in a document, which attributes may be applied to them, the order in which they may appear, and the allowable parent/child relationships.

software - Wireshark

Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, in May 2006 the project was renamed Wireshark due to trademark issues.
Wireshark is cross-platform, using the GTK+ widget toolkit to implement its user interface, and using pcap to capture packets; it runs on various Unix-like operating systems including Linux, OS X, BSD, and Solaris, and on Microsoft Windows. There is also a terminal-based (non-GUI) version called TShark. Wireshark, and the other programs distributed with it such as TShark, are free software, released under the terms of the GNU General Public License.

Functionality

Wireshark is very similar to tcpdump, but has a graphical front-end, plus some integrated sorting and filtering options.
Wireshark allows the user to put network interface controllers that support promiscuous mode into that mode, in order to see all traffic visible on that interface, not just traffic addressed to one of the interface's configured addresses and broadcast/multicast traffic. However, when capturing with a packet analyzer in promiscuous mode on a port on a network switch, not all of the traffic travelling through the switch will necessarily be sent to the port on which the capture is being done, so capturing in promiscuous mode will not necessarily be sufficient to see all traffic on the network. Port mirroring or various network taps extend capture to any point on the network. Simple passive taps are extremely resistant to tampering.
On Linux, BSD, and OS X, with libpcap 1.0.0 or later, Wireshark 1.4 and later can also put wireless network interface controllers into monitor mode.
History

In the late 1990s, Gerald Combs, a computer science graduate of the University of Missouri–Kansas City, was working for a small Internet service provider. The commercial protocol analysis products at the time were priced around $1500 and did not run on the company's primary platforms (Solaris and Linux), so Gerald began writing Ethereal and released the first version around 1998. The Ethereal trademark is owned by Network Integration Services.
In May 2006, Combs accepted a job with CACE Technologies. Combs still held copyright on most of Ethereal's source code (and the rest was re-distributable under the GNU GPL), so he used the contents of the Ethereal Subversion repository as the basis for the Wireshark repository. However, he did not own the Ethereal trademark, so he changed the name to Wireshark.[6] In 2010 Riverbed Technology purchased CACE and took over as the primary sponsor of Wireshark. Ethereal development has ceased, and an Ethereal security advisory recommended switching to Wireshark.
Wireshark has won several industry awards over the years, including awards from eWeek, InfoWorld, and PC Magazine. It is also the top-rated packet sniffer in the Insecure.Org network security tools survey and was the SourceForge Project of the Month in August 2010.
Combs continues to maintain the overall code of Wireshark and issue releases of new versions of the software. The product website lists over 600 additional contributing authors.

Features

Wireshark is software that "understands" the structure of different networking protocols. Thus, it is able to display the encapsulation and the fields along with their meanings of different packets specified by different networking protocols. Wireshark uses pcap to capture packets, so it can only capture the packets on the types of networks that pcap supports.

·        Data can be captured "from the wire" from a live network connection or read from a file that recorded already-captured packets.
·        Live data can be read from a number of types of network, including Ethernet, IEEE 802.11, PPP, and loopback.
·        Captured network data can be browsed via a GUI, or via the terminal (command line) version of the utility, TShark.
·        Captured files can be programmatically edited or converted via command-line switches to the "editcap" program.
·        Data display can be refined using a display filter.
·        Plug-ins can be created for dissecting new protocols.
·        VoIP calls in the captured traffic can be detected. If encoded in a compatible encoding, the media flow can even be played.
·        Raw USB traffic can be captured.
·        Wireshark's native network trace file format is the libpcap format supported by libpcap and WinPcap, so it can exchange files of captured network traces with other applications using the same format, including tcpdump and CA NetMaster. It can also read captures from other network analyzers, such as snoop, Network General's Sniffer, and Microsoft Network Monitor.
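As an aside on the libpcap format mentioned in the last point: a capture file begins with a fixed 24-byte global header, which can be decoded with a few lines of Python (a sketch assuming the classic little-endian magic number; nanosecond-resolution and big-endian variants use other magics):

```python
import struct

# Classic libpcap global header: 24 bytes.
# magic, version major/minor, tz offset, sigfigs, snaplen, link type
PCAP_HDR = struct.Struct("<IHHiIII")

def read_pcap_header(blob):
    magic, major, minor, tz, sigfigs, snaplen, linktype = PCAP_HDR.unpack(blob[:24])
    assert magic == 0xA1B2C3D4, "not a little-endian classic pcap file"
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}

# A synthetic header such as tcpdump/Wireshark would write (linktype 1 = Ethernet):
hdr = PCAP_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(read_pcap_header(hdr))   # -> {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```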


Security

Capturing raw network traffic from an interface requires elevated privileges on some platforms. For this reason, older versions of Ethereal/Wireshark and tethereal/TShark often ran with superuser privileges. Taking into account the huge number of protocol dissectors that are called when traffic is captured, this can pose a serious security risk given the possibility of a bug in a dissector. Due to the rather large number of past vulnerabilities (many of which allowed remote code execution) and doubts about future development, OpenBSD removed Ethereal from its ports tree prior to OpenBSD 3.6.
Elevated privileges are not needed for all operations. For example, an alternative is to run tcpdump, or the dumpcap utility that comes with Wireshark, with superuser privileges to capture packets into a file, and later analyze the packets by running Wireshark with restricted privileges. For near-real-time analysis, each captured file can be merged by mergecap into a growing file processed by Wireshark. On wireless networks, it is possible to use the Aircrack wireless security tools to capture IEEE 802.11 frames and read the resulting dump files with Wireshark.
As of Wireshark 0.99.7, Wireshark and TShark run dumpcap to do traffic capture. On platforms where special privileges are needed to capture traffic, only dumpcap needs to be set up to run with those special privileges: neither Wireshark nor TShark need to run with special privileges, and neither of them should be run with special privileges.


PACKET SNIFFER

The basic tool for observing the messages exchanged between executing protocol entities is called a packet sniffer. As the name suggests, a packet sniffer captures ("sniffs") messages being sent/received from/by your computer; it will also typically store and/or display the contents of the various protocol fields in these captured messages. A packet sniffer itself is passive. It observes messages being sent and received by applications and protocols running on your computer, but never sends packets itself. Similarly, received packets are never explicitly addressed to the packet sniffer. Instead, a packet sniffer receives a copy of packets that are sent/received from/by applications and protocols executing on your machine.

Figure 1 shows the structure of a packet sniffer. At the right of Figure 1 are the protocols (in this case, Internet protocols) and applications (such as a web browser or FTP client) that normally run on your computer. The packet sniffer, shown within the dashed rectangle in Figure 1, is an addition to the usual software in your computer, and consists of two parts. The packet capture library receives a copy of every link-layer frame that is sent from or received by your computer. Messages exchanged by higher-layer protocols such as HTTP, FTP, TCP, UDP, DNS, or IP are all eventually encapsulated in link-layer frames that are transmitted over physical media such as an Ethernet cable. In Figure 1, the assumed physical medium is an Ethernet, and so all upper-layer protocols are eventually encapsulated within an Ethernet frame. Capturing all link-layer frames thus gives you all messages sent/received from/by all protocols and applications executing in your computer.

The second component of a packet sniffer is the packet analyzer, which displays the contents of all fields within a protocol message. In order to do so, the packet analyzer must "understand" the structure of all messages exchanged by protocols. For example, suppose we are interested in displaying the various fields in messages exchanged by the HTTP protocol in Figure 1. The packet analyzer understands the format of Ethernet frames, and so can identify the IP datagram within an Ethernet frame. It also understands the IP datagram format, so that it can extract the TCP segment within the IP datagram. It then understands the TCP segment structure, so it can extract the HTTP message contained in the TCP segment. Finally, it understands the HTTP protocol and so, for example, knows that the first bytes of an HTTP message will contain the string "GET," "POST," or "HEAD."
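The layered extraction described above can be sketched in Python with the stdlib struct module. The frame below is synthetic and the dissector is deliberately simplified (fixed Ethernet header, no checksum or option handling), but it follows the same descent: Ethernet, then IP, then TCP, then the HTTP payload.

```python
import struct

def dissect(frame):
    """Peel Ethernet -> IPv4 -> TCP headers, returning the payload,
    the way a packet analyzer descends through encapsulations.
    Simplified sketch: assumes IPv4 over Ethernet, well-formed input."""
    ethertype, = struct.unpack("!H", frame[12:14])   # after dst+src MACs
    assert ethertype == 0x0800                       # IPv4
    ip = frame[14:]
    ihl = (ip[0] & 0x0F) * 4                         # IP header length in bytes
    proto = ip[9]
    assert proto == 6                                # TCP
    tcp = ip[ihl:]
    data_off = (tcp[12] >> 4) * 4                    # TCP header length
    return tcp[data_off:]

# Build a minimal synthetic frame carrying an HTTP request line:
payload = b"GET / HTTP/1.1\r\n"
tcp = struct.pack("!HHIIBBHHH", 1234, 80, 0, 0, 5 << 4, 0, 0, 0, 0) + payload
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(tcp), 0, 0, 64, 6, 0,
                 b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02") + tcp
frame = b"\x00" * 12 + struct.pack("!H", 0x0800) + ip
print(dissect(frame))   # -> b'GET / HTTP/1.1\r\n'
```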

Running Wireshark
When you run the Wireshark program, the Wireshark graphical user interface shown in Figure 2 will be displayed. Initially, no data will be displayed in the various windows.


The command menus are standard pulldown menus located at the top of the window. Of interest to us now are the File and Capture menus. The File menu allows you to save captured packet data or open a file containing previously captured packet data, and to exit the Wireshark application. The Capture menu allows you to begin packet capture.

The packet-listing window displays a one-line summary for each packet captured, including the packet number (assigned by Wireshark; this is not a packet number contained in any protocol's header), the time at which the packet was captured, the packet's source and destination addresses, the protocol type, and protocol-specific information contained in the packet. The packet listing can be sorted according to any of these categories by clicking on a column name. The protocol type field lists the highest-level protocol that sent or received this packet, i.e., the protocol that is the source or ultimate sink for this packet.

Recent trends - No-Touch Interfaces

We’ve gotten used to the idea that computers are machines that we operate with our hands. Just as we Gen Xers became comfortable with keyboards and mice, today’s millennial generation has learned to text at blazing speed. Each new iteration of technology has required new skills to use it proficiently.
That’s why the new trend towards no-touch interfaces is so fundamentally different. From Microsoft’s Kinect to Apple’s Siri to Google’s Project Glass, we’re beginning to expect that computers adapt to us rather than the other way around.
The basic pattern recognition technology has been advancing for generations and, thanks to accelerating returns, we can expect computer interfaces to become almost indistinguishable from humans in little more than a decade.

Tapped Out? Moving Toward a No-Touch Future
With advances in sensors and cameras, no-touch interfaces and devices will continue to be further integrated into daily life. Smartphones such as the Pantech Perception and the upcoming Samsung Galaxy S4 are the latest devices to incorporate touchless features, with each device enabling users to browse through picture galleries or answer a phone call by just waving a hand over the smartphone screen. The Galaxy S4 also has Smart Scroll, which detects the user's eyes and scrolls web pages based on the angle at which the user tilts his or her head.
Many smartphone users are already familiar with no-touch technology thanks to the wide adoption of voice recognition software in wireless devices. Smartphone users use apps like Google Now on Android and Siri on iOS for hands-free access to endless information. And now, Google Chrome has added voice recognition to its latest version, enabling features like email dictation. This technology is also being incorporated into automobiles to allow for a hands-free mobile experience for drivers.
Gesture technology is also featured in products like Kinect for Xbox. To expand this functionality to computers, Kinect for Windows was created, using software and sensors. One app for Kinect for Windows allows surgeons to use gestures to control medical images and scans on computers, eliminating time lost when using unsterilized computers and then having to scrub up again. Intel has developed a gesture-sensing device using conventional and infrared cameras, microphones and software to enable apps on computers to track a person's fingers, recognize faces, infer emotions and interpret words spoken in nine languages. However, this is just the beginning. Mobile voice interfaces will soon be even more commonplace, allowing users to talk to a device without touching it first.
TOUCH-LESS TOUCH SCREEN USER INTERFACE
It was touch screens that initially created a great furore. Gone are the days when you had to fiddle with a touch screen and end up scratching it. Touch screen displays are ubiquitous worldwide. Frequently touching a touchscreen display with a pointing device such as a finger can result in the gradual desensitization of the touchscreen to input and can ultimately lead to failure of the touchscreen. To avoid this, a simple user interface for touchless control of electrically operated equipment is being developed. Elliptic Labs' innovative technology lets you control gadgets such as computers, MP3 players or mobile phones without touching them. Unlike other systems, which depend on distance to the sensor or on sensor selection, this system depends on hand and/or finger motions: a hand wave in a certain direction, a flick of the hand in one area, holding the hand in one area, or pointing with one finger, for example. The device is based on optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. This sensor is connected to a digital image processor, which interprets the patterns of motion and outputs the results as signals to control fixtures, appliances, machinery, or any device controllable through electrical signals.
   

The touchless touch screen sounds like it would be nice and easy; however, after closer examination it looks like it could be quite a workout. This unique screen is made by TouchKo, White Electronics Designs and Groupe 3D. The screen resembles the Nintendo Wii without the Wii Controller. With the touchless touch screen your hand doesn't have to come in contact with the screen at all; it works by detecting your hand movements in front of it. This is a pretty unique and interesting invention, until you break out in a sweat. Now this technology doesn't compare to the hologram-like IO2 Technologies Heliodisplay M3, but that's for anyone who has $18,100 lying around.
You probably won't see this screen in stores any time soon. Everybody loves a touch screen, and when you get a gadget with a touch screen the experience is really exhilarating. When the iPhone was introduced, everyone felt the same. But gradually, the exhilaration started fading: while using the phone with a fingertip or a stylus, the screen started getting lots of fingerprints and scratches. Even with a screen protector, dirty marks over such a beautiful glossy screen are a strict no-no. The same thing happens with the iPod Touch. Most of the time we have to wipe the screen to get a better, unobstructed view of it.
TOUCH LESS MONITOR:
Sure, everybody is doing touchscreen interfaces these days, but this is the first time I've seen a monitor that can respond to gestures without actually having to touch the screen. The monitor, based on technology from TouchKo, was recently demonstrated by White Electronic Designs and Tactyl Services at the CeBIT show. Designed for applications where touch may be difficult, such as for doctors who might be wearing surgical gloves, the display features capacitive sensors that can read movements from up to 15cm away from the screen. Software can then translate gestures into screen commands.
Touchscreen interfaces are great, but all that touching, like foreplay, can be a little bit of a drag. Enter the wonder kids from Elliptic Labs, who are hard at work on implementing a touchless interface. The input method is, well, thin air. The technology detects motion in 3D and requires no special worn sensors for operation. By simply pointing at the screen, users can manipulate the object being displayed in 3D. Details are light on how this actually functions, but what we do know is this:
What is the technology behind it?
It obviously requires a sensor, but the sensor is neither hand-mounted nor present on the screen. The sensor can be placed either on the table or near the screen. And the hardware setup is so compact that it can be fitted into a tiny device like an MP3 player or a mobile phone. It recognizes the position of an object from as far as 5 feet away.
WORKING:
The system is capable of detecting movements in three dimensions without your fingers ever touching the screen. Their patented touchless interface doesn’t require that you wear any special sensors on your hand either. You just point at the screen (from as far as 5 feet away), and you can manipulate objects in 3D.
Sensors are mounted around the screen that is being used; by interacting in the line of sight of these sensors, motion is detected and interpreted into on-screen movements. What is to stop unintentional gestures from being read as input is not entirely clear, but it looks promising nonetheless. Elliptic Labs says the technology will easily be small enough to be implemented in cell phones and the like.
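Elliptic Labs has not published how its gesture interpretation actually works, but the idea of turning a stream of sensed 3D hand positions into on-screen commands can be sketched roughly. The following Python sketch is purely illustrative (the function name, axis conventions and thresholds are all assumptions): it classifies a short sequence of (x, y, z) samples by the dominant axis of motion.

```python
# Illustrative sketch only -- NOT Elliptic Labs' actual algorithm, which is
# not public. A touchless sensor reports a stream of 3D hand positions;
# we classify the overall motion by its dominant axis.

def classify_gesture(positions):
    """positions: list of (x, y, z) samples; z is distance from the screen."""
    if len(positions) < 2:
        return "none"
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    dz = positions[-1][2] - positions[0][2]
    # A push toward the screen (z shrinking) reads as a "select".
    if abs(dz) > max(abs(dx), abs(dy)):
        return "select" if dz < 0 else "release"
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-up" if dy > 0 else "swipe-down"

# Hand moves 30 units to the right at a steady distance from the screen:
print(classify_gesture([(0, 0, 50), (10, 1, 50), (30, 2, 50)]))  # swipe-right
```

A real system would also need to reject unintentional movements, for example by requiring a minimum speed or travel distance before accepting a gesture, which is exactly the open question raised above.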
Touchless Gives a Glimpse of the GBUI:
We have seen the futuristic user interfaces of movies like Minority Report and The Matrix Revolutions, where people wave their hands in three dimensions and the computer understands what the user wants, shifting and sorting data with precision. In fact, Microsoft's XD Huang demonstrated how his company sees the future of the GUI at ITEXPO this past September, but at the show the example was in two dimensions, not three.
The GBUI as seen in The Matrix
The GBUI as seen in Minority Report
Microsoft has demonstrated its vision of the UI at its Redmond headquarters, and it involves lots of gestures that let you take applications and forward them to others with simple hand movements. The demos included the concept of software understanding business processes and helping you work: after reading a document, you could just push it off the side of your screen, and the system would know to post it on an intranet and also send a link to a specific group of people.
Touchless UI:
The basic idea described in the patent is that sensors arrayed around the perimeter of the device would be capable of sensing finger movements in 3-D space. The user could use her fingers much as on a touch phone, but without actually having to touch the screen.
Touchless SDK:
The Touchless SDK, released by Microsoft Office Labs, is an open source SDK for .NET applications. It enables developers to create multi-touch applications using a webcam for input. Colour-based markers defined by the user are tracked, and their information is published through events to clients of the SDK. In a nutshell, the Touchless SDK enables "touch without touching."
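The marker-tracking, event-publishing model the SDK describes can be sketched in a few lines. The sketch below is in Python with invented names (the real SDK is a .NET/C# library and its API differs): a tracker scans a frame for pixels matching the registered marker colour, computes their centroid, and publishes the position to subscribers as an event.

```python
# Minimal sketch of the Touchless SDK's tracking model (hypothetical Python
# names; the real SDK is .NET/C#). A frame is a 2D grid of colour values;
# a marker is tracked by colour, and its centroid is published to event
# subscribers on every processed frame.

class MarkerTracker:
    def __init__(self, marker_color):
        self.marker_color = marker_color   # colour the user registered
        self.listeners = []                # event subscribers

    def subscribe(self, callback):
        self.listeners.append(callback)

    def process_frame(self, frame):
        """Find the centroid of all pixels matching the marker colour."""
        hits = [(x, y)
                for y, row in enumerate(frame)
                for x, pixel in enumerate(row)
                if pixel == self.marker_color]
        if not hits:
            return None                    # marker not visible this frame
        cx = sum(x for x, _ in hits) / len(hits)
        cy = sum(y for _, y in hits) / len(hits)
        for callback in self.listeners:    # publish a "marker moved" event
            callback((cx, cy))
        return (cx, cy)

# A tiny 4x4 "frame": 'G' marks the green marker, '.' is background.
frame = [list(row) for row in ["....", ".GG.", ".GG.", "...."]]

positions = []
tracker = MarkerTracker("G")
tracker.subscribe(positions.append)
tracker.process_frame(frame)
print(positions)  # centroid of the 2x2 marker block
```

A real webcam pipeline would match colours within a tolerance range rather than exactly, and would run this per frame at video rate, but the publish/subscribe shape is the same.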
Touchless demo:
The Touchless Demo is an open source application that anyone with a webcam can use to experience multi-touch; no geekiness required. The demo was created using the Touchless SDK and Windows Forms with C#. There are four fun demos: Snake, where you control a snake with a marker; Defender, an up-to-four-player version of a Pong-like game; Map, where you can rotate, zoom, and move a map using two markers; and Draw, where the marker is used to, you guessed it, draw!
Touch wall:
Touch Wall refers to the touch screen hardware setup itself; the corresponding software to run Touch Wall, which is built on a standard version of Vista, is called Plex. Touch Wall and Plex are superficially similar to Microsoft Surface, a multi-touch table computer that was introduced in 2007 and which recently became commercially available in select AT&T stores. Touch Wall, however, is a fundamentally simpler mechanical system and is also significantly cheaper to produce. While Surface retails at around $10,000, the hardware to “turn almost anything into a multi-touch interface” for Touch Wall is just “hundreds of dollars”.
Touch Wall consists of three infrared lasers that scan a surface. A camera notes when something breaks through the laser line and feeds that information back to the Plex software. Early prototypes, say Pratley and Sands, were made simply on a cardboard screen: a projector was used to show the Plex interface on the cardboard, and the system worked fine. It’s also clear that the only real limit on screen size is the projector, meaning entire walls can easily be turned into a multi-touch user interface. Scrap those whiteboards in the office and make every flat surface into a touch display instead. You might even save some money.
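The sensing loop described above reduces to a simple mapping problem. The sketch below is an assumption-laden illustration (Microsoft has not published Plex internals, and the function names are invented): the camera reports the pixel column where the laser line was interrupted, and that column is mapped linearly onto the projected screen's x-axis.

```python
# Sketch of one axis of the Touch Wall sensing loop (illustrative names;
# Plex internals are not public). An infrared laser sweeps a plane just
# above the surface; a camera reports the pixel column where the beam was
# interrupted, which maps linearly onto the projected screen's x-axis.

def break_to_screen_x(camera_column, camera_width, screen_width):
    """Map a camera column where the beam break was seen to a screen x."""
    return camera_column / camera_width * screen_width

# Camera frame is 640 px wide; projected wall image is 1920 px wide.
x = break_to_screen_x(320, 640, 1920)
print(x)  # a break seen mid-frame maps to mid-screen
```

This linear mapping is also why the projector is the only real limit on size: making the wall bigger only changes `screen_width`, not the sensing hardware. A production system would add a calibration step to correct for camera placement and lens distortion.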
What’s next?
Many personal computers will likely have similar screens in the near future. But touch interfaces are nothing new -- witness ATMs.
How about getting completely out of touch? A startup called LM3Labs says it's working with major computer makers in Japan, Taiwan and the US to incorporate touchless navigation into their laptops. Called Airstrike, the system uses tiny charge-coupled device (CCD) cameras integrated into each side of the keyboard to detect user movements.
You can drag windows around or close them, for instance, by pointing and gesturing in midair above the keyboard. You should be able to buy an Airstrike-equipped laptop next year, with high-end stand-alone keyboards to follow.
Any such system is unlikely to replace typing and mousing, but that's not the point. Airstrike aims to give you an occasional quick break from those activities.
CONCLUSION:               

Today’s thoughts are again around the user interface, and efforts are being put in to better the technology day in and day out. The touchless touch screen user interface can be used effectively in computers, cell phones, webcams and laptops. Maybe a few years down the line, our bodies can be transformed into a virtual mouse and virtual keyboard -- our body may be turned into an input device!

Sir Tim Berners-Lee (Inventor of World Wide Web)

Sir Timothy John "Tim" Berners-Lee, OM, KBE, FRS, FREng, FRSA (born 8 June 1955), also known as "TimBL", is a British computer scientist, best known as the inventor of the World Wide Web. He made a proposal for an information management system in March 1989, and he implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet in mid-November 1990.
Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the Web's continued development. He is also the founder of the World Wide Web Foundation, and is a senior researcher and holder of the Founders Chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is a director of the Web Science Research Initiative (WSRI), and a member of the advisory board of the MIT Center for Collective Intelligence.
In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work. In April 2009, he was elected a foreign associate of the United States National Academy of Sciences. He was honoured as the "Inventor of the World Wide Web" during the 2012 Summer Olympics opening ceremony, in which he appeared in person, working at a NeXT Computer at the London Olympic Stadium. He tweeted "This is for everyone", which was instantly spelled out in LCD lights attached to the chairs of the 80,000 people in the audience.

Early life

Berners-Lee was born in southwest London, England, on 8 June 1955, one of four children born to Conway Berners-Lee and Mary Lee Woods. His parents worked on the first commercially built computer, the Ferranti Mark 1. He attended Sheen Mount Primary School, and then went on to attend southwest London's independent Emanuel School from 1969 to 1973. A keen trainspotter as a child, he learnt about electronics from tinkering with a model railway. He studied at Queen's College, Oxford, from 1973 to 1976, where he received a first-class degree in physics.

Career

In 1989, while working at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, Tim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier "Enquire" work, it was designed to allow people to work together by combining their knowledge in a web of hypertext documents. He wrote the first World Wide Web server, "httpd", and the first client, "WorldWideWeb", a what-you-see-is-what-you-get hypertext browser/editor which ran in the NeXTStep environment. The work was started in October 1990; the program "WorldWideWeb" was first made available within CERN in December, and on the Internet at large in the summer of 1991.
Through 1991 and 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial specifications of URIs, HTTP and HTML were refined and discussed in larger circles as the Web technology spread.
Tim Berners-Lee graduated from Queen's College, Oxford, in 1976. Whilst there he built his first computer with a soldering iron, TTL gates, an M6800 processor and an old television.
He spent two years with Plessey Telecommunications Ltd (Poole, Dorset, UK), a major UK telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar-code technology.
In 1978 Tim left Plessey to join D.G. Nash Ltd (Ferndown, Dorset, UK), where he wrote, among other things, typesetting software for intelligent printers and a multitasking operating system.
A year and a half spent as an independent consultant included a six-month stint (June-December 1980) as consultant software engineer at CERN. Whilst there, he wrote, for his own private use, his first program for storing information using random associations. Named "Enquire" and never published, this program formed the conceptual basis for the future development of the World Wide Web.
From 1981 until 1984, Tim worked at John Poole's Image Computer Systems Ltd, with technical design responsibility. Work here included real time control firmware, graphics and communications software, and a generic macro language. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for scientific data acquisition and system control. Among other things, he worked on FASTBUS system software and designed a heterogeneous remote procedure call system.
In 1994, Tim founded the World Wide Web Consortium at the then Laboratory for Computer Science (LCS), which merged with the Artificial Intelligence Lab in 2003 to become the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). Since that time he has served as Director of the World Wide Web Consortium, a Web standards organization which develops interoperable technologies (specifications, guidelines, software, and tools) to lead the Web to its full potential. The Consortium has host sites located at MIT, at ERCIM in Europe, and at Keio University in Japan, as well as Offices around the world.

In 1999, he became the first holder of the 3Com Founders Chair. In 2008 he was named 3Com Founders Professor of Engineering in the School of Engineering, with a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL, where he also heads the Decentralized Information Group (DIG). In December 2004 he was named a Professor in the Computer Science Department at the University of Southampton, UK. He was co-Director of the Web Science Trust, launched in 2006 as the Web Science Research Initiative, to help create the first multidisciplinary research body to examine the World Wide Web and offer the practical solutions needed to help guide its future use and design. He is a Director of the World Wide Web Foundation, started in 2008 to fund and coordinate efforts to further the potential of the Web to benefit humanity.
In June 2009 then Prime Minister Gordon Brown announced that he would work with the UK Government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Berners-Lee and Professor Nigel Shadbolt are the two key figures behind data.gov.uk, a UK Government project to open up almost all data acquired for official purposes for free re-use. Commenting on the opening up of Ordnance Survey data in April 2010 Berners-Lee said that: "The changes signal a wider cultural change in Government based on an assumption that information should be in the public domain unless there is a good reason not to—not the other way around." He went on to say "Greater openness, accountability and transparency in Government will give people greater choice and make it easier for individuals to get more directly involved in issues that matter to them."
In November 2009, Berners-Lee launched the World Wide Web Foundation in order to "Advance the Web to empower humanity by launching transformative programs that build local capacity to leverage the Web as a medium for positive change."
Berners-Lee is one of the pioneering voices in favour of net neutrality, and has expressed the view that ISPs should supply "connectivity with no strings attached" and should neither control nor monitor customers' browsing activities without their expressed consent. He advocates the idea that net neutrality is a kind of human network right: "Threats to the Internet, such as companies or governments that interfere with or snoop on Internet traffic, compromise basic human network rights."
Berners-Lee is President of the Open Data Institute. He is the author, with Mark Fischetti, of the book "Weaving the Web", on the past, present and future of the Web.
On 18 March 2013, Tim, along with Vinton Cerf, Robert Kahn, Louis Pouzin and Marc Andreessen, was awarded the Queen Elizabeth Prize for Engineering for "ground-breaking innovation in engineering that has been of global benefit to humanity."

Awards

1995:
Kilby Foundation's "Young Innovator of the Year" Award
ACM Software Systems Award (co-recipient)
Honorary Prix Ars Electronica
Distinguished Fellow of the British Computer Society
1997:
Awarded an Order of the British Empire (OBE)
IEEE Koji Kobayashi Computers and Communications Award
Duddell Medal of the Institute of Physics
Interactive Services Association's Distinguished Service Award
MCI Computerworld/Smithsonian Award for Leadership in Innovation
International Communication Institute's Columbus Prize
1999:
Named "One of the 100 greatest minds of the century" by Time Magazine
World Technology Award for Communication Technology
Honorary Fellowship, The Society for Technical Communications
2000:
Paul Evan Peters Award of ARL, Educause and CNI
Electronic Freedom Foundation's Pioneer Award
George R Stibitz Computer Pioneer Award, American Computer Museum
Special Award for Outstanding Contribution of the World Television Forum
2002:
Japan Prize, the Science and Technology Foundation of Japan
Prince of Asturias Foundation Prize for Scientific and Technical Research (shared with Larry Roberts, Rob Kahn and Vint Cerf)
Fellow, Guglielmo Marconi Foundation
Albert Medal of the Royal Society for the Encouragement of Art, Manufactures and Commerce (RSA)
Common Wealth Award for Distinguished Service for Mass Communications
2007:
Awarded the Order of Merit by H.M. the Queen
Charles Stark Draper Prize, National Academy of Engineering
Lovelace Medal, British Computer Society
D&AD President's Award for Innovation and Creativity
MITX (Massachusetts Innovation & Technology Exchange) Leadership Award
Foreign Associate of the National Academy of Engineering
2008:
BITC Award for Excellence
IEEE/RSE Wolfson James Clerk Maxwell Award
Fellow, IEEE
Pathfinder Award, Harvard Kennedy School of Government
2009:
Foreign Associate, National Academy of Sciences
Given the title of Royal Designer by the Royal Society for the Encouragement of Arts, Manufactures and Commerce
Webby Awards Lifetime Achievement Award
2010:
UNESCO Niels Bohr Gold Medal Award
2011:
The Mikhail Gorbachev Award
DAMA Web Awards, Bilbao Web Summit
2012
Internet Hall of Fame

Honorary Degrees:

·        Parsons School of Design, New York (D.F.A., 1995)
·        Southampton University (D.Sc., 1995)
·        Southern Cross University (1998)
·        Open University (D.U., 2000)
·        University of Port Elizabeth (DSc., 2002)
·        Lancaster University (D.Sc., 2004)
·        Universitat Oberta de Catalunya (2008)
·        University of Manchester (2008)
·        Universidad Politécnica de Madrid (2009)
·        VU University Amsterdam (2009)
·        Harvard University (2011)





Selected Publications

·        Berners-Lee, T.J., et al., "World-Wide Web: Information Universe", Electronic Publishing: Research, Applications and Policy, April 1992.

·        Berners-Lee, T.J., et al., "The World Wide Web", Communications of the ACM, August 1994.

·        Berners-Lee, T. with Fischetti, M., Weaving the Web, Harper San Francisco, 1999.

·        Berners-Lee, T., Connolly, D., and Swick, R.R., "Web Architecture: Describing and Exchanging Data", W3C Note, June-July 1999.

·        Berners-Lee, T. and Hendler, J., "Publishing on the Semantic Web", Nature, 26 April 2001, pp. 1023-1025.

·        Hendler, J., Berners-Lee, T.J., and Miller, E., "Integrating Applications on the Semantic Web", Journal of the Institute of Electrical Engineers of Japan, Vol. 122(10), October 2002, pp. 676-680.