Computational imaging method identifies letters printed on the first nine pages of a closed book

MIT researchers and their colleagues are designing an imaging system that can read closed books.

In the latest issue of Nature Communications, the researchers describe a prototype of the system, which they tested on a stack of papers, each with one letter printed on it. The system was able to correctly identify the letters on the top nine sheets.

“The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab and corresponding author on the new paper. He adds that the system could be used to analyze any materials organized in thin layers, such as coatings on machine parts or pharmaceuticals.

Heshmat is joined on the paper by Ramesh Raskar, an associate professor of media arts and sciences; Albert Redo Sanchez, a research specialist in the Camera Culture group at the Media Lab; two of the group’s other members; and by Justin Romberg and Alireza Aghasi of Georgia Tech.

The MIT researchers developed the algorithms that acquire images from individual sheets in stacks of paper, and the Georgia Tech researchers developed the algorithm that interprets the often distorted or incomplete images as individual letters. “It’s actually kind of scary,” Heshmat says of the letter-interpretation algorithm. “A lot of websites have these letter certifications [captchas] to make sure you’re not a robot, and this algorithm can get through a lot of them.”

Timing terahertz

The system uses terahertz radiation, the band of electromagnetic radiation between microwaves and infrared light, which has several advantages over other types of waves that can penetrate surfaces, such as X-rays or sound waves. Terahertz radiation has been widely researched for use in security screening, because different chemicals absorb different frequencies of terahertz radiation to different degrees, yielding a distinctive frequency signature for each. By the same token, terahertz frequency profiles can distinguish between ink and blank paper, in a way that X-rays can’t.

Terahertz radiation can also be emitted in such short bursts that the distance it has traveled can be gauged from the difference between its emission time and the time at which reflected radiation returns to a sensor. That gives it much better depth resolution than ultrasound.

The system exploits the fact that trapped between the pages of a book are tiny air pockets only about 20 micrometers deep. The difference in refractive index — the degree to which they bend light — between the air and the paper means that the boundary between the two will reflect terahertz radiation back to a detector.

In the researchers’ setup, a standard terahertz camera emits ultrashort bursts of radiation, and the camera’s built-in sensor detects their reflections. From the reflections’ time of arrival, the MIT researchers’ algorithm can gauge the distance to the individual pages of the book.
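As a rough sketch of that time-of-flight arithmetic (the 20-micrometer gap figure comes from the article; the arrival times and code below are purely illustrative, not the researchers' algorithm):

```python
# Illustrative sketch of time-of-flight depth ranging, as described above.
# Each reflection's round-trip delay maps to a depth: d = c * t / 2.
# The arrival times below are invented for illustration.

C = 3e8  # speed of light in air, m/s

def depths_from_arrival_times(arrival_times_s):
    """Convert round-trip arrival times of reflected pulses to depths (meters)."""
    return [C * t / 2.0 for t in arrival_times_s]

# Hypothetical arrival times for reflections off successive page/air boundaries,
# spaced so that consecutive boundaries sit about 20 micrometers apart.
gap = 20e-6                                        # air gap between pages
times = [2 * n * gap / C for n in range(1, 10)]    # nine boundaries
print([f"{d * 1e6:.0f} um" for d in depths_from_arrival_times(times)])
```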

True signals

While most of the radiation is either absorbed or reflected by the book, some of it bounces around between pages before returning to the sensor, producing a spurious signal. The sensor’s electronics also produce a background hum. One of the tasks of the MIT researchers’ algorithm is to filter out all this “noise.”
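The paper's filtering algorithm is more sophisticated than this, but the basic idea of keeping only the true boundary echoes can be sketched as a simple peak-picker that ignores anything below the background hum and anything arriving too soon after a stronger echo; every name and threshold here is hypothetical:

```python
import numpy as np

def pick_page_echoes(signal, dt, noise_floor, min_gap_s):
    """Very rough stand-in for the echo-selection step: keep local maxima that
    rise above the background hum and are separated by at least min_gap_s,
    discarding weaker, later multi-bounce energy near each true echo."""
    peaks = []
    last_t = -np.inf
    for i in range(1, len(signal) - 1):
        t = i * dt
        if (signal[i] > noise_floor
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]
                and t - last_t >= min_gap_s):
            peaks.append(t)
            last_t = t
    return peaks
```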

Programmable routers promise flexible traffic management without sacrificing speed

Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication.

“This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. “Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower.”

“You need to have the ability for researchers and engineers to try out thousands of ideas,” he adds. “With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly.”

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They’re joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Different strokes

Traffic management can get tricky because of the different types of data traveling over a network, and the different types of performance guarantees offered by different services. With Internet phone calls, for instance, delays are a nuisance, but the occasional dropped packet — which might translate to a missing word in a sentence — could be tolerable. With a large data file, on the other hand, a slight delay could be tolerable, but missing data isn’t.

Similarly, a network may guarantee equal bandwidth distribution among its users. Every router in a data network has its own memory bank, called a buffer, where it can queue up packets. If one user has filled a router’s buffer with packets from a single high-definition video, and another is trying to download a comparatively tiny text document, the network might want to bump some of the video packets in favor of the text, to help guarantee both users a minimum data rate.
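As a toy illustration of that kind of policy (not one of the schemes from the papers), a shared buffer might evict a packet from whichever flow currently occupies the most space whenever it fills up; the class and parameters below are invented for the example:

```python
from collections import defaultdict

class FairDropBuffer:
    """Toy shared buffer: when full, drop a packet from the flow using the most
    space, so one heavy flow (e.g., a video stream) cannot starve a small one."""
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.queue = []                    # (flow_id, payload) in arrival order
        self.per_flow = defaultdict(int)   # packets currently queued per flow

    def enqueue(self, flow_id, payload):
        if len(self.queue) >= self.capacity:
            victim = max(self.per_flow, key=self.per_flow.get)
            # Evict the oldest queued packet belonging to the heaviest flow.
            idx = next(i for i, (fid, _) in enumerate(self.queue) if fid == victim)
            del self.queue[idx]
            self.per_flow[victim] -= 1
        self.queue.append((flow_id, payload))
        self.per_flow[flow_id] += 1
```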

A router might also want to modify a packet to convey information about network conditions, such as whether the packet encountered congestion, where, and for how long; it might even want to suggest new transmission rates for senders.

Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them — all under a variety of different circumstances. And while in simulations many of these schemes promise improved network performance, few of them have ever been deployed, because of hardware constraints in routers.

Practical applications for non-native English speakers

After thousands of hours of work, MIT researchers have released the first major database of fully annotated English sentences written by non-native speakers.

The researchers who led the project had already shown that the grammatical quirks of non-native speakers writing in English could be a source of linguistic insight. But they hope that their dataset could also lead to applications that would improve computers’ handling of spoken or written language of non-native English speakers.

“English is the most used language on the Internet, with over 1 billion speakers,” says Yevgeni Berzak, a graduate student in electrical engineering and computer science, who led the new project. “Most of the people who speak English in the world or produce English text are non-native speakers. This characteristic is often overlooked when we study English scientifically or when we do natural-language processing for English.”

Most natural-language-processing systems, which enable smartphone and other computer applications to process requests phrased in ordinary language, are based on machine learning, in which computer systems look for patterns in huge sets of training data. “If you want to handle noncanonical learner language, in terms of the training material that’s available to you, you can only train on standard English,” Berzak explains.

Systems trained on nonstandard English, on the other hand, could be better able to handle the idiosyncrasies of non-native English speakers, such as tendencies to drop or add prepositions, to substitute particular tenses for others, or to misuse particular auxiliary verbs. Indeed, the researchers hope that their work could lead to grammar-correction software targeted to native speakers of other languages.

Diagramming sentences

The researchers’ dataset consists of 5,124 sentences culled from exam essays written by students of English as a second language (ESL). The sentences were drawn, in approximately equal distribution, from native speakers of 10 languages that are the primary tongues of roughly 40 percent of the world’s population.

Every sentence in the dataset includes at least one grammatical error. The original source of the sentences was a collection made public by Cambridge University, which included annotation of the errors, but no other grammatical or syntactic information.

To provide that additional information, Berzak recruited a group of MIT undergraduate and graduate students from the departments of Electrical Engineering and Computer Science (EECS), Linguistics, and Mechanical Engineering, led by Carolyn Spadine, a graduate student in linguistics.

After eight weeks of training in how to annotate both grammatically correct and error-ridden sentences, the students began working directly on the data. There are three levels of annotation. The first involves basic parts of speech — whether a word is a noun, a verb, a preposition, and so on. The next is a more detailed description of parts of speech — plural versus singular nouns, verb tenses, comparative and superlative adjectives, and the like.

Next, the annotators charted the syntactic relationships between the words of the sentences, using a relatively new annotation scheme called the Universal Dependency formalism. Syntactic relationships include things like which nouns are the objects of which verbs, which verbs are auxiliaries of other verbs, which adjectives modify which nouns, and so on.
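For a sense of what such an annotation looks like, a Universal Dependencies parse can be written as (index, word, part of speech, head, relation) tuples; the learner sentence below is invented rather than drawn from the dataset:

```python
# Invented example: "She go to school yesterday" (learner sentence with a tense error).
# Each entry: (index, word, universal POS tag, index of syntactic head, relation).
# Head index 0 stands for the artificial root of the sentence.
annotation = [
    (1, "She",       "PRON", 2, "nsubj"),    # subject of the verb
    (2, "go",        "VERB", 0, "root"),     # main verb (uninflected: the error)
    (3, "to",        "ADP",  4, "case"),     # preposition attaches to its noun
    (4, "school",    "NOUN", 2, "obl"),      # oblique modifier of the verb
    (5, "yesterday", "NOUN", 2, "obl:tmod"), # temporal modifier of the verb
]

# The corrected sentence ("She went to school yesterday") keeps the same tree;
# only the word form and its tense feature change, which is what allows the
# corrected and uncorrected versions to be compared annotation-for-annotation.
```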

The annotators created syntactic charts for both the corrected and uncorrected versions of each sentence. That required some prior conceptual work, since grammatical errors can make words’ syntactic roles difficult to interpret.

Berzak and Spadine wrote a 20-page guide to their annotation scheme, much of which dealt with the handling of error-ridden sentences. Consistency in the treatment of such sentences is essential to any envisioned application of the dataset: A machine-learning system can’t learn to recognize an error if the error is described differently in different training examples.

Repeatable results

The researchers’ methodology provides good evidence that annotators can chart ungrammatical sentences consistently. For every sentence, one evaluator annotated it completely; another reviewed the annotations and flagged any areas of disagreement; and a third ruled on the disagreements.

There was some disagreement on how to handle ungrammatical sentences — but there was some disagreement on how to handle grammatical sentences, too. In general, levels of agreement were comparable for both types of sentences.
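A common way to put a number on that kind of agreement is the fraction of words for which two annotators chose the same head and relation; the sketch below uses the tuple format from the earlier example and is not necessarily the metric used in the paper:

```python
def attachment_agreement(ann_a, ann_b):
    """Fraction of tokens on which two annotations agree about both the
    syntactic head and the relation label. Inputs are lists of
    (index, word, pos, head, relation) tuples for the same sentence."""
    assert len(ann_a) == len(ann_b)
    same = sum(1 for a, b in zip(ann_a, ann_b) if a[3] == b[3] and a[4] == b[4])
    return same / len(ann_a)
```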

The researchers report these and other results in a paper being presented at the Association for Computational Linguistics annual conference in August. Joining Berzak and Spadine on the paper are Boris Katz, who is Berzak’s advisor and a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory; and the undergraduate annotators: Jessica Kenney, Jing Xian Wang, Lucia Lam, Keiko Sophie Mori, and Sebastian Garza.

The researchers’ dataset is now one of the 59 datasets available from the organization that oversees the Universal Dependency (UD) standard. Berzak also created an online interface for the dataset, so that researchers can look for particular kinds of errors, in sentences produced by native speakers of particular languages, and the like.

“What I find most interesting about the ESL [dataset] is that the use of UD opens up a lot of possibilities for systematically comparing the ESL data not only to native English but also to other languages that have corpora annotated using UD,” says Joakim Nivre, a professor of computational linguistics at Uppsala University in Sweden and one of the developers of the UD standard. “Hopefully, other ESL researchers will follow their example, which will enable further comparisons along several dimensions, ESL to ESL, ESL to native, et cetera.”

The Computer Science and Artificial Intelligence Laboratory’s answer to spectrum crunch: wireless data transfer more than three times faster, at double the range

There are few things more frustrating than trying to use your phone on a crowded network. With phone usage growing faster than wireless spectrum, we’re all now fighting over smaller and smaller bits of bandwidth. Spectrum crunch is such a big problem that the White House is getting involved, recently announcing both a $400 million research initiative and a $4 million global competition devoted to the issue.

But researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) say that they have a possible solution. In a new paper, a team led by Professor Dina Katabi demonstrates a system called MegaMIMO 2.0 that can transfer wireless data more than three times faster than existing systems while also doubling the range of the signal.

The soon-to-be-commercialized system’s key insight is to coordinate multiple access points at the same time, on the same frequency, without creating interference. This means that MegaMIMO 2.0 could dramatically improve the speed and strength of wireless networks, particularly at high-usage events like concerts, conventions and football games.

“In today’s wireless world, you can’t solve spectrum crunch by throwing more transmitters at the problem, because they will all still be interfering with one another,” says Ezzeldin Hamed, a PhD student who is lead author on a new paper on the topic. “The answer is to have all those access points work with each other simultaneously to efficiently use the available spectrum.”

To test MegaMIMO 2.0’s performance, the researchers created a mock conference room with a set of four laptops that each roamed the space atop Roomba robots. The experiments found that the system could increase the devices’ data-transfer speed by 330 percent.

MegaMIMO 2.0’s hardware is the size of a standard router, and consists of a processor, a real-time baseband processing system, and a transceiver board.

Katabi and Hamed co-wrote the paper with Hariharan Rahul SM ’99, PhD ’13, an alum of Katabi’s group and visiting researcher with the group, as well as visiting student Mohammed A. Albdelghany. Rahul will present the paper at next week’s conference for the Association for Computing Machinery’s Special Interest Group on Data Communications (SIGCOMM 16).

How it works

The main reason that your smartphone works so speedily is multiple-input multiple-output (MIMO), which means that it uses several transmitters and receivers at the same time. Radio waves bounce off surfaces and therefore arrive at the receivers at slightly different times; devices with multiple receivers, then, are able to combine the various streams to transmit data much faster. For example, a router with three antennas works twice as fast as one with two antennas.

But in a world of limited bandwidth, these speeds are still not as fast as they could be, and so in recent years researchers have searched for the wireless industry’s Holy Grail: being able to coordinate several routers at once so that they can triangulate the data even faster and more consistently.

“The problem is that, just like how two radio stations can’t play music over the same frequency at the same time, multiple routers cannot transfer data on the same chunk of spectrum without creating major interference that muddies the signal,” says Rahul.

For the CSAIL team, the missing piece to the puzzle was a new technique for coordinating multiple transmitters by synchronizing their phases. The team developed special signal-processing algorithms that allow multiple independent transmitters to transmit data on the same piece of spectrum to multiple independent receivers without interfering with each other.
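The hard part of MegaMIMO 2.0 is keeping physically separate transmitters phase-synchronized, but the payoff, joint precoding so that each receiver hears only its own stream, can be illustrated with textbook zero-forcing over an invented channel matrix (this is a generic sketch, not the team's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented flat-fading channel: 3 receivers x 3 cooperating transmitters.
H = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Zero-forcing precoder: W = H^H (H H^H)^{-1}, so that H @ W = I and each
# receiver sees only the symbol intended for it, with no cross-interference.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j])   # one data symbol per receiver
received = H @ (W @ symbols)                    # what the receivers observe

print(np.allclose(received, symbols))           # True: interference cancelled
```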

Analysis could give city planners timelier data on how people move through their cities

In making decisions about infrastructure development and resource allocation, city planners rely on models of how people move through their cities, on foot, in cars, and on public transportation. Those models are largely based on surveys of residents’ travel habits.

But conducting surveys and analyzing their results is costly and time consuming: A city might go more than a decade between surveys. And even a broad survey will cover only a tiny fraction of a city’s population.

In the latest issue of the Proceedings of the National Academy of Sciences, researchers from MIT and Ford Motor Company describe a new computational system that uses cellphone location data to infer urban mobility patterns. Applying the system to six weeks of data from residents of the Boston area, the researchers were able to quickly assemble the kind of model of urban mobility patterns that typically takes years to build.
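The researchers' pipeline is far richer than this, but the basic step of turning raw, timestamped phone observations into the stays and trips that a mobility model needs can be sketched as follows; all thresholds and the distance shortcut are hypothetical:

```python
def extract_stays(records, radius_km=0.3, min_minutes=10):
    """Group consecutive phone observations that remain within radius_km of
    each other for at least min_minutes into 'stays'; the movements between
    stays are the inferred trips.  records: list of (timestamp_minutes, lat, lon)."""
    stays, i = [], 0
    while i < len(records):
        j = i
        while (j + 1 < len(records)
               and _rough_km(records[i][1:], records[j + 1][1:]) <= radius_km):
            j += 1
        if records[j][0] - records[i][0] >= min_minutes:
            stays.append((records[i][0], records[j][0], records[i][1], records[i][2]))
        i = j + 1
    return stays

def _rough_km(p, q):
    """Crude equirectangular distance, good enough for a sketch."""
    dlat = (p[0] - q[0]) * 111.0
    dlon = (p[1] - q[1]) * 111.0 * 0.74   # ~cos(latitude) for the Boston area
    return (dlat ** 2 + dlon ** 2) ** 0.5
```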

The system holds the promise of not only more accurate and timely data about urban mobility but the ability to quickly determine whether particular attempts to address cities’ transportation needs are working.

“In the U.S., every metropolitan area has an MPO, which is a metropolitan planning organization, and their main job is to use travel surveys to derive the travel demand model, which is their baseline for predicting and forecasting travel demand to build infrastructure,” says Shan Jiang, a postdoc in the Human Mobility and Networks Lab in MIT’s Department of Civil and Environmental Engineering and first author on the new paper. “So our method and model could be the next generation of tools for the planners to plan for the next generation of infrastructure.”

To validate their new system, the researchers compared the model it generated to the model currently used by Boston’s MPO. The two models accorded very well.

“The great advantage of our framework is that it learns mobility features from a large number of users, without having to ask them directly about their mobility choices,” says Marta González, an associate professor of civil and environmental engineering (CEE) at MIT and senior author on the paper. “Based on that, we create individual models to estimate complete daily trajectories of the vast majority of mobile-phone users. Likely, in time, we will see that this brings the comparative advantage of making urban transportation planning faster and smarter and even allows directly communicating recommendations to device users.”

Joining Jiang and González on the paper are Daniele Veneziano, a professor of CEE at MIT; Yingxiang Yang, a graduate student in CEE; Siddharth Gupta, a research assistant in the Human Mobility and Networks Lab, which González leads; and Shounak Athavale, an information technology manager at Ford Motor’s Palo Alto Research and Innovation Center.

Nancy Lynch named associate head of the Department of Electrical Engineering and Computer Science

Nancy Lynch, the NEC Professor of Software Science and Engineering, has been appointed as associate head of the Department of Electrical Engineering and Computer Science (EECS), effective September 1.

Lynch is known for her fundamental contributions to the foundations of distributed computing. Her work applies a mathematical approach to explore the inherent limits on computability and complexity in distributed systems.

Her best-known research is the “FLP” impossibility result for distributed consensus in the presence of process failures. Other research includes the I/O automata system modeling frameworks. Lynch’s recent work focuses on wireless network algorithms and biological distributed algorithms.

The longtime head of the Theory of Distributed Systems (TDS) research group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Lynch joined MIT in 1981. She received her BS from Brooklyn College in 1968 and her PhD from MIT in 1972, both in mathematics. Most recently, Lynch served for several years as head of CSAIL’s Theory of Computation (TOC) group.

She is also the author of several books and textbooks, including the graduate textbook Distributed Algorithms, considered a standard reference in the field. Lynch has also co-authored several hundred articles about distributed algorithms and impossibility results, and about formal modeling and verification of distributed systems. She is the recipient of numerous awards, and is an ACM Fellow, a Fellow of the American Academy of Arts and Sciences, and a member of both the National Academy of Sciences and the National Academy of Engineering.

Lynch succeeds Silvio Micali, the Ford Professor of Computer Science and Engineering, who has served as associate department head since January 2015.

“Silvio brought his characteristic diligence and energy to all aspects of his work as associate department head,” said Anantha Chandrakasan, EECS department head and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “I would like to extend my sincere thanks and express my appreciation for his tremendous service.”