High precision under adverse weather conditions

During a mid-March snowstorm, researchers from MIT Lincoln Laboratory achieved real-time, nighttime, centimeter-level vehicle localization while driving a test vehicle at highway speeds over roads whose lane markings were hidden by the snow. The sport utility vehicle used in the demonstration was equipped with a system that employs a novel ground-penetrating radar technique to localize the vehicle to within centimeters of the boundaries of its lane. The technique could solve one of the problems limiting the development and adoption of self-driving vehicles: How can a vehicle navigate to stay within its lane when bad weather obscures road markings?

“Most self-localizing vehicles use optical systems to determine their position,” explains Byron Stanley, the lead researcher on Lincoln Laboratory’s localizing ground-penetrating radar (LGPR) program. “They rely on optics to ‘see’ lane markings, road surface maps, and surrounding infrastructure to orient themselves. Optical systems work well in fair weather conditions, but it is challenging, even impossible, for them to work when snow covers the markings and surfaces or precipitation obscures points of reference.”

The laboratory has developed a sensor that uses very high frequency (VHF) radar reflections of underground features to generate a baseline map of a road’s subsurface. This map is generated during an LGPR-equipped vehicle’s drive along a roadway and becomes the reference for future travels over that stretch of road. On a revisit, the LGPR mounted beneath the vehicle measures the current reflections of the road’s subsurface features, and its algorithm estimates the vehicle’s location by comparing those current GPR readings to the baseline map stored in the system’s memory. An article titled “Localizing Ground Penetrating RADAR: A Step Toward Robust Autonomous Ground Vehicle Localization” in a recent issue of the Journal of Field Robotics describes Lincoln Laboratory’s demonstration of the use of LGPR to achieve 4-cm (1.6-inch) in-lane localization accuracy at speeds up to 60 miles per hour.
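
To make the comparison step concrete, here is a minimal sketch of correlation-based map matching in Python. The map layout, window size, and scoring are illustrative assumptions, not Lincoln Laboratory's actual algorithm.

```python
# Minimal sketch of correlation-based map matching, in the spirit of LGPR.
# The map format and scoring are illustrative assumptions only.
import numpy as np

def localize(baseline_map: np.ndarray, current_scan: np.ndarray) -> int:
    """Return the along-track index in `baseline_map` that best matches
    `current_scan`.

    baseline_map: (n_positions, n_depth_samples) reflection profiles
                  recorded on the mapping pass.
    current_scan: (n_depth_samples,) profile measured on the revisit.
    """
    best_idx, best_score = 0, -np.inf
    scan = (current_scan - current_scan.mean()) / current_scan.std()
    for i, profile in enumerate(baseline_map):
        ref = (profile - profile.mean()) / profile.std()
        score = float(np.dot(ref, scan)) / len(scan)  # normalized correlation
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Toy usage: the revisit scan is a noisy copy of the profile at index 42.
rng = np.random.default_rng(0)
gmap = rng.normal(size=(100, 64))
scan = gmap[42] + 0.1 * rng.normal(size=64)
print(localize(gmap, scan))  # -> 42
```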

“The LGPR uses relatively deep subsurface features as points of reference because these features are inherently stable and less susceptible to the dynamics of the world above,” Stanley says. Surface characteristics, such as lane striping, signs, trees, or the landscape, are dynamic; they change, or are changed, over time. The natural inhomogeneity in the highly static subterranean geology — for example, differences in soil layers or rock beds — dominates the GPR reflection profiles. The GPR-produced map of the subsurface environment is a fairly complete picture, capturing every distinct object and soil feature that is not significantly smaller than a quarter of a wavelength.

The LGPR’s main component is a waterproof, closely spaced 12-element antenna array that uses a very high frequency, stepped-frequency continuous wave to penetrate deeper beneath the ground than can typical GPR systems. This deeper look allows the LGPR to map the most stable, thus reliable, subsurface characteristics. “The GPR array is designed so that measurements from each element look similar and thus can be compared with others on subsequent passes,” Stanley says. A single-board computer is used to correlate the GPR data into a three-dimensional GPS-tagged GPR map and extract a corrected GPS position estimate.

The researchers who conducted the March test runs — Stanley, Jeffrey Koechling, and Henry Wegiel — alternated driving over a route in the Boston area during a snowstorm from midnight to 9 a.m. Correlation and overlap estimates demonstrated that the LGPR technique could accurately, and repeatedly, determine the vehicle’s position as it traveled. Data from an onboard sensor suite were then post-processed to estimate the system’s accuracy. “We thought the radar would see through snow, but it was wonderful to finally get the data proving it,” Koechling says.

Programming language Julia is helping to solve complex problems

“Julia is a great tool.” That’s what New York University professor of economics and Nobel laureate Thomas J. Sargent told 250 engineers, computer scientists, programmers, and data scientists at the third annual JuliaCon held at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

If you have not yet heard of Julia, it is not a “who,” but a “what.” Developed at CSAIL, in the MIT Department of Mathematics, and throughout the Julia community, it is a fast-maturing programming language designed to be simple to learn, highly dynamic, and as fast as C, with uses ranging from general-purpose programming to highly quantitative applications such as scientific computing, machine learning, data mining, large-scale linear algebra, and distributed and parallel computing. The language was released as open source in 2012 and has begun to amass a large following of users and contributors.

This year’s JuliaCon, held June 21-25, was the biggest yet, and featured presentations describing how Julia is being used to solve complex problems in areas as diverse as economic modeling, spaceflight, bioinformatics, and many others.

“We are very excited about Julia because our models are complicated,” said Sargent, who is also a senior fellow at the Hoover Institution. “It’s easy to write the problem down, but it’s hard to solve it — especially if our model is high dimensional. That’s why we need Julia. Figuring out how to solve these problems requires some creativity. The guys who deserve a lot of the credit are the ones who figured out how to put this into a computer. This is a walking advertisement for Julia.” Sargent added that the reason Julia is important is because the next generation of macroeconomic models is very computationally intensive, using high-dimensional models and fitting them over extremely large data sets.

Sargent was awarded the Nobel Memorial Prize in Economic Sciences in 2011 for his work on macroeconomics. Together with John Stachurski he founded quantecon.net, a Julia- and Python-based learning platform for quantitative economics focusing on algorithms and numerical methods for studying economic problems as well as coding skills.

The Julia programming language was created and open-sourced thanks, in part, to a 2012 innovation grant awarded by the MIT Deshpande Center for Technological Innovation. Julia combines the functionality of quantitative environments such as MATLAB, R, SPSS, Stata, SAS, and Python with the speed of production programming languages like Java and C++ to solve big data and analytics problems. It delivers dramatic improvements in simplicity, speed, capacity, and productivity for data scientists, algorithmic traders, quants, scientists, and engineers who need to solve massive computation problems quickly and accurately. The number of Julia users has grown dramatically during the last five years, doubling every nine months. It is taught at MIT, Stanford University, and dozens of universities worldwide. Julia 0.5 will launch this month, and Julia 1.0 in 2017.

Computational imaging method identifies letters in closed books

MIT researchers and their colleagues are designing an imaging system that can read closed books.

In the latest issue of Nature Communications, the researchers describe a prototype of the system, which they tested on a stack of papers, each with one letter printed on it. The system was able to correctly identify the letters on the top nine sheets.

“The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab and corresponding author on the new paper. He adds that the system could be used to analyze any materials organized in thin layers, such as coatings on machine parts or pharmaceuticals.

Heshmat is joined on the paper by Ramesh Raskar, an associate professor of media arts and sciences; Albert Redo Sanchez, a research specialist in the Camera Culture group at the Media Lab; two of the group’s other members; and by Justin Romberg and Alireza Aghasi of Georgia Tech.

The MIT researchers developed the algorithms that acquire images from individual sheets in stacks of paper, and the Georgia Tech researchers developed the algorithm that interprets the often distorted or incomplete images as individual letters. “It’s actually kind of scary,” Heshmat says of the letter-interpretation algorithm. “A lot of websites have these letter certifications [captchas] to make sure you’re not a robot, and this algorithm can get through a lot of them.”

Timing terahertz

The system uses terahertz radiation, the band of electromagnetic radiation between microwaves and infrared light, which has several advantages over other types of waves that can penetrate surfaces, such as X-rays or sound waves. Terahertz radiation has been widely researched for use in security screening, because different chemicals absorb different frequencies of terahertz radiation to different degrees, yielding a distinctive frequency signature for each. By the same token, terahertz frequency profiles can distinguish between ink and blank paper, in a way that X-rays can’t.

Terahertz radiation can also be emitted in such short bursts that the distance it has traveled can be gauged from the difference between its emission time and the time at which reflected radiation returns to a sensor. That gives it much better depth resolution than ultrasound.

The system exploits the fact that trapped between the pages of a book are tiny air pockets only about 20 micrometers deep. The difference in refractive index — the degree to which they bend light — between the air and the paper means that the boundary between the two will reflect terahertz radiation back to a detector.

In the researchers’ setup, a standard terahertz camera emits ultrashort bursts of radiation, and the camera’s built-in sensor detects their reflections. From the reflections’ time of arrival, the MIT researchers’ algorithm can gauge the distance to the individual pages of the book.
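
As a rough illustration of that time-of-flight arithmetic, the hedged Python sketch below converts peak arrival times in a reflected waveform into boundary depths using depth = c·Δt/2. The waveform, sampling interval, and threshold are invented, and the real algorithm must also reject inter-page reflections and sensor noise.

```python
# Hedged sketch of time-of-flight depth estimation from a reflected
# terahertz waveform. All numbers here are illustrative assumptions.
import numpy as np

C = 3e8  # speed of light, m/s

def page_depths(signal: np.ndarray, dt: float, threshold: float) -> np.ndarray:
    """Return depths (m) of reflecting boundaries from a time-domain trace.

    signal: reflected amplitude sampled every `dt` seconds after emission.
    """
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1] and signal[i] >= signal[i + 1]]
    times = np.array(peaks) * dt
    return C * times / 2  # halve the round-trip distance

# Toy trace: echoes from boundaries spaced ~20 um apart, like the air
# gaps between pages.
dt = 1e-16  # 0.1 fs sampling, illustrative
trace = np.zeros(10000)
for depth in (100e-6, 120e-6, 140e-6):
    trace[int(2 * depth / C / dt)] = 1.0
print(page_depths(trace, dt, 0.5))  # ~[1.0e-4, 1.2e-4, 1.4e-4] m
```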

True signals

While most of the radiation is either absorbed or reflected by the book, some of it bounces around between pages before returning to the sensor, producing a spurious signal. The sensor’s electronics also produce a background hum. One of the tasks of the MIT researchers’ algorithm is to filter out all this “noise.”

Operating a successful organization is much more than having the appropriate gear

Business owners often buy the newest equipment to make sure jobs are done correctly. Yet equipment alone does not guarantee success. Up-to-date gear is crucial, but it is just as important that every employee is properly trained. The right training ensures workers know exactly how to use the equipment to get the best results and gives them the expertise to produce what is needed.

New hires may already know a little about the process and how it works. They probably understand how to do their own jobs, and may do them well, so a business owner might wonder why the company should invest in decoupled molding training. In reality, there is almost always more for employees to learn, and the more they know, the more productive they can be. They should also understand at least the basics of how other jobs connect to their own, so they can work with the rest of the staff to build the product.

Knowing only their own task helps, but understanding the complete process, and how the pieces fit together, lets employees collaborate far more effectively, which can yield real benefits for the business. After scientific molding training, they will know how to produce the product faster and with less waste. That makes them more productive, so the company can do more in the same amount of time and increase profits quickly. It also helps the business grow faster and makes the money spent on extra training worthwhile.

If your organization uses molding machines, invest in additional training for your staff. Look into the injection molding training available today to learn what your options are, or enroll employees in injection molding seminars so they can learn more about their jobs and how to be more efficient.

Any competent spreadsheet user can construct custom database queries and reports

When an organization needs a new database, it typically hires a contractor to build it or buys a heavily supported product customized to its industry sector.

Usually, the organization already owns all the data it wants to put in the database. But writing complex queries in SQL or some other database scripting language to pull data from many different sources; to filter, sort, combine, and otherwise manipulate it; and to display it in an easy-to-read format requires expertise that few organizations have in-house.

New software from researchers at MIT’s Computer Science and Artificial Intelligence Laboratory could make databases much easier for laypeople to work with. The program’s home screen looks like a spreadsheet, but it lets users build their own database queries and reports by combining functions familiar to any spreadsheet user.

Simple drop-down menus let the user pull data into the tool from multiple sources. The user can then sort and filter the data, recombine it using algebraic functions, and hide unneeded columns and rows, and the tool will automatically generate the corresponding database queries.

The researchers also conducted a usability study that suggests that even in its prototype form, their tool could be easier to use than existing commercial database systems that represent thousands, if not tens of thousands, of programmer-hours of work.

“Organizations spend about $35 billion a year on relational databases,” says Eirik Bakke, an MIT graduate student in electrical engineering and computer science who led the development of the new tool. “They provide the software to store the data and to do efficient computation on the data, but they do not provide a user interface. So what inevitably ends up happening when you have something extremely industry-specific is, you have to hire a programmer who spends about a year of work to build a user interface for your particular domain.”

Familiar face

Bakke’s tool, which he developed with the help of his thesis advisor, MIT Professor of Electrical Engineering David Karger, could allow organizations to get up and running with a new database without having to wait for a custom interface. Bakke and Karger presented the tool at the Association for Computing Machinery’s International Conference on Management of Data last week.

The tool’s main drop-down menu has 17 entries, most of which — such as “hide,” “sort,” “filter,” and “delete” — will look familiar to spreadsheet users. In the conference paper, Bakke and Karger prove that those apparently simple functions are enough to construct any database query possible in SQL-92, which is the core of the version of SQL taught in most database classes.

Some database queries are simple: A company might, for instance, want a printout of the names and phone numbers of all of its customers. But it might also want a printout of the names and phone numbers of just those customers in a given zip code whose purchase totals exceeded some threshold amount over a particular time span. If each purchase has its own record in the database, the query will need to include code for summing up the purchase totals and comparing them to the threshold quantity.
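
For illustration, here is roughly the SQL such a request compiles to, sketched with Python's built-in sqlite3 module. The schema, names, and numbers are invented, not taken from Bakke and Karger's system.

```python
# Sketch of an aggregate "purchase totals over a threshold" query.
# Schema and values are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, phone TEXT, zip TEXT);
CREATE TABLE purchases (customer_id INTEGER, amount REAL, day TEXT);
INSERT INTO customers VALUES (1, 'Alice', '555-0100', '02139'),
                             (2, 'Bob',   '555-0101', '02139');
INSERT INTO purchases VALUES (1, 600, '2016-01-05'), (1, 500, '2016-02-01'),
                             (2,  50, '2016-01-20');
""")

rows = db.execute("""
    SELECT c.name, c.phone, SUM(p.amount) AS total
    FROM customers c JOIN purchases p ON p.customer_id = c.id
    WHERE c.zip = '02139' AND p.day BETWEEN '2016-01-01' AND '2016-06-30'
    GROUP BY c.id
    HAVING total > 1000          -- sum purchases, then compare to threshold
""").fetchall()
print(rows)  # [('Alice', '555-0100', 1100.0)]
```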

What makes things even more complicated is that a database will generally store related data in different tables. For demonstration purposes, Bakke loaded several existing databases into his system. One of them, a database used at MIT to track research grants, has 35 separate tables; another, which records all the information in a university course catalogue, has 15.

Likewise, a company might store customers’ names and contact information in one table, lists of their purchase orders in another, and the items constituting each purchase order in a third. A relatively simple query that pulls up the phone numbers of everyone who bought a particular product in a particular date range could require tracking data across all three tables.

Bakke and Karger’s tool lets the user pull in individual columns from any table — say, name and phone number from the first, purchase orders and dates from the second, and products from the third. (The tool will automatically group the products associated with each purchase order together in a single spreadsheet “cell.”)

A filter function just like that found in most spreadsheet programs can restrict the date range and limit the results to those that include a particular product. The user can then hide any unnecessary columns, and the report is complete.
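
And here is a hedged sketch of the three-table query underlying a report like that one, again with an invented schema; the actual queries the tool generates may look different.

```python
# Sketch of a report that tracks data across three tables: customers,
# their orders, and the items in each order. Names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, phone TEXT);
CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, day TEXT);
CREATE TABLE items     (order_id INTEGER, product TEXT);
INSERT INTO customers VALUES (1, 'Alice', '555-0100'), (2, 'Bob', '555-0101');
INSERT INTO orders VALUES (10, 1, '2016-03-01'), (11, 2, '2016-07-04');
INSERT INTO items VALUES (10, 'widget'), (10, 'gizmo'), (11, 'widget');
""")

rows = db.execute("""
    SELECT DISTINCT c.name, c.phone
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    JOIN items  i ON i.order_id = o.id
    WHERE i.product = 'widget'                      -- filter by product
      AND o.day BETWEEN '2016-01-01' AND '2016-06-30'  -- restrict date range
""").fetchall()
print(rows)  # [('Alice', '555-0100')] -- Bob's order falls outside the range
```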

New system provides anonymity even if all but one of its servers are compromised

Anonymity networks protect people living under repressive regimes from surveillance of their Internet use. But the recent discovery of vulnerabilities in the most popular of these networks — Tor — has prompted computer scientists to try to come up with more secure anonymity schemes.

At the Privacy Enhancing Technologies Symposium in July, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory and the École Polytechnique Fédérale de Lausanne will present a new anonymity scheme that provides strong security guarantees but uses bandwidth much more efficiently than its predecessors. In experiments, the researchers’ system required only one-tenth as much time as similarly secure experimental systems to transfer a large file between anonymous users.

“The initial use case that we thought of was to do anonymous file-sharing, where the receiving end and sending end don’t know each other,” says Albert Kwon, a graduate student in electrical engineering and computer science and first author on the new paper. “The reason is that things like honeypotting” — in which spies offer services through an anonymity network in order to entrap its users — “are a real issue. But we also studied applications in microblogging, something like Twitter, where you want to anonymously broadcast your messages to everyone.”

The system devised by Kwon and his coauthors — his advisor, Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science at MIT; David Lazar, also a graduate student in electrical engineering and computer science; and Bryan Ford SM ’02 PhD ’08, an associate professor of computer and communication sciences at the École Polytechnique Fédérale de Lausanne — employs several existing cryptographic techniques but combines them in a novel manner.

Shell game

The heart of the system is a series of servers called a mixnet. Each server permutes the order in which it receives messages before passing them on to the next. If, for instance, messages from senders Alice, Bob, and Carol reach the first server in the order A, B, C, that server would send them to the second server in a different order — say, C, B, A. The second server would permute them before sending them to the third, and so on.

An adversary that had tracked the messages’ points of origin would have no idea which was which by the time they exited the last server. It’s this reshuffling of the messages that gives the new system its name: Riffle.
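
A toy sketch of that per-server shuffling, with seeded random permutations standing in for each server's secret; a real mixnet also re-encrypts every message at each hop.

```python
# Toy mixnet: each server applies its own secret permutation to the batch
# before forwarding, so output order reveals nothing about input order.
import random

def mix_server(batch: list, seed: int) -> list:
    rng = random.Random(seed)        # stand-in for a server's secret
    shuffled = batch[:]
    rng.shuffle(shuffled)
    return shuffled

batch = ["msg-from-Alice", "msg-from-Bob", "msg-from-Carol"]
for server_seed in (11, 22, 33):     # three servers in the chain
    batch = mix_server(batch, server_seed)
print(batch)  # order no longer matches the senders' order
```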

Like many anonymity systems, Riffle also uses a technique known as onion encryption; “Tor,” for instance, is an acronym for “the onion router.” With onion encryption, the sending computer wraps each message in several layers of encryption, using a public-key encryption system like those that safeguard most financial transactions online. Each server in the mixnet removes only one layer of encryption, so that only the last server knows a message’s ultimate destination.
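
Here is a minimal sketch of that layering, using symmetric Fernet keys from the third-party cryptography package as a stand-in for the public-key scheme the article describes.

```python
# Onion-layering sketch. Symmetric Fernet keys stand in for public-key
# encryption; each server strips exactly one layer.
from cryptography.fernet import Fernet

server_keys = [Fernet.generate_key() for _ in range(3)]

# The sender wraps the message so the FIRST server's layer is outermost.
ciphertext = b"meet at noon"
for key in reversed(server_keys):
    ciphertext = Fernet(key).encrypt(ciphertext)

# Each server in turn peels one layer; only the last recovers the plaintext.
for key in server_keys:
    ciphertext = Fernet(key).decrypt(ciphertext)
print(ciphertext)  # b'meet at noon'
```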

A mixnet with onion encryption is effective against a passive adversary, which can only observe network traffic. But it’s vulnerable to active adversaries, which can infiltrate servers with their own code. This is not improbable in anonymity networks, where frequently the servers are simply volunteers’ Internet-connected computers, loaded with special software.

If, for instance, an adversary that has commandeered a mixnet router wants to determine the destination of a particular message, it could simply replace all the other messages it receives with its own, bound for a single destination. Then it would passively track the one message that doesn’t follow its own prespecified route.

Public proof

To thwart message tampering, Riffle uses a technique called a verifiable shuffle. Because of the onion encryption, the messages that each server forwards look nothing like the ones it receives; it has peeled off a layer of encryption. But the encryption can be done in such a way that the server can generate a mathematical proof that the messages it sends are valid manipulations of the ones it receives.

New system helps ensure databases used in medical research keep genetic data private

Genome-wide association studies, which try to find correlations between particular genetic variations and disease diagnoses, are a staple of modern medical research.

But because they depend on databases that contain people’s medical histories, they carry privacy risks. An attacker armed with genetic information about someone — from, say, a skin sample — could query a database for that person’s medical data. Even without the skin sample, an attacker who was permitted to make repeated queries, each informed by the results of the last, could, in principle, extract private data from the database.

In the latest issue of the journal Cell Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Indiana University at Bloomington describe a new system that permits database queries for genome-wide association studies but reduces the chances of privacy compromises to almost zero.

It does that by adding a little bit of misinformation to the query results it returns. That means that researchers using the system could begin looking for drug targets with slightly inaccurate data. But in most cases, the answers returned by the system will be close enough to be useful.

And an instantly searchable online database of genetic data, even one that returned slightly inaccurate information, could make biomedical research much more efficient.

“Right now, what a lot of people do, including the NIH, for a long time, is take all their data — including, often, aggregate data, the statistics we’re interested in protecting — and put them into repositories,” says Sean Simmons, an MIT postdoc in mathematics and first author on the new paper. “And you have to go through a time-consuming process to get access to them.”

That process involves a raft of paperwork, including explanations of how the research enabled by the repositories will contribute to the public good, which requires careful review. “We’ve waited months to get access to various repositories,” says Bonnie Berger, the Simons Professor of Mathematics at MIT, who was Simmons’s thesis advisor and is the corresponding author on the paper. “Months.”

Bring the noise

Genome-wide association studies generally rely on genetic variations called single-nucleotide polymorphisms, or SNPs (pronounced “snips”). A SNP is a variation of one nucleotide, or DNA “letter,” at a specified location in the genome. Millions of SNPs have been identified in the human population, and certain combinations of SNPs can serve as proxies for larger stretches of DNA that tend to be conserved among individuals.

The new system, which Berger and Simmons developed together with Cenk Sahinalp, a professor of computer science at Indiana University, implements a technique called “differential privacy,” which has been a major area of cryptographic research in recent years. Differential-privacy techniques add a little bit of noise, or random variation, to the results of database searches, to confound algorithms that would seek to extract private information from the results of several, tailored, sequential searches.

The amount of noise required depends on the strength of the privacy guarantee — how low you want to set the likelihood of leaking private information — and the type and volume of data. The more people whose data a SNP database contains, the less noise the system needs to add; essentially, it’s easier to get lost in a crowd. But the more SNPs the system records, the more flexibility an attacker has in constructing privacy-compromising searches, which increases the noise requirements.

The researchers considered two types of common queries. In one, the user asks for the statistical correlation between a particular SNP and a particular disease. In the other, the user asks for a list of the SNPs in a particular region of the genome that correlate best with a particular disease.
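
For a sense of how such noise is added, here is a sketch of the Laplace mechanism, the standard differential-privacy building block; the sensitivity and epsilon values are illustrative, and the published system's exact mechanism may differ.

```python
# Hedged sketch of the Laplace mechanism for differential privacy.
# Sensitivity and epsilon here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def private_query(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Return the query answer plus Laplace noise scaled to sensitivity/epsilon.

    Smaller epsilon -> stronger privacy guarantee -> more noise. Larger
    cohorts lower a statistic's sensitivity, so less noise is needed:
    it is easier to get lost in a crowd.
    """
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

# Toy example: correlation between one SNP and a disease in a large cohort.
true_correlation = 0.12
print(private_query(true_correlation, sensitivity=0.01, epsilon=0.5))
```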

Built-in optics could enable chips that use trapped ions as quantum bits

Quantum computers are largely hypothetical devices that could perform some calculations much more rapidly than conventional computers can. Instead of the bits of classical computation, which can represent 0 or 1, quantum computers consist of quantum bits, or qubits, which can, in some sense, represent 0 and 1 simultaneously.

Although quantum systems with as many as 12 qubits have been demonstrated in the lab, building quantum computers complex enough to perform useful computations will require miniaturizing qubit technology, much the way the miniaturization of transistors enabled modern computers.

Trapped ions are probably the most widely studied qubit technology, but they’ve historically required a large and complex hardware apparatus. In today’s Nature Nanotechnology, researchers from MIT and MIT Lincoln Laboratory report an important step toward practical quantum computers, with a paper describing a prototype chip that can trap ions in an electric field and, with built-in optics, direct laser light toward each of them.

“If you look at the traditional assembly, it’s a barrel that has a vacuum inside it, and inside that is this cage that’s trapping the ions. Then there’s basically an entire laboratory of external optics that are guiding the laser beams to the assembly of ions,” says Rajeev Ram, an MIT professor of electrical engineering and one of the senior authors on the paper. “Our vision is to take that external laboratory and miniaturize much of it onto a chip.”

Caged in

The Quantum Information and Integrated Nanosystems group at Lincoln Laboratory was one of several research groups already working to develop simpler, smaller ion traps known as surface traps. A standard ion trap looks like a tiny cage, whose bars are electrodes that produce an electric field. Ions line up in the center of the cage, parallel to the bars. A surface trap, by contrast, is a chip with electrodes embedded in its surface. The ions hover 50 micrometers above the electrodes.

Cage traps are intrinsically limited in size, but surface traps could, in principle, be extended indefinitely. With current technology, they would still have to be held in a vacuum chamber, but they would allow many more qubits to be crammed inside.

“We believe that surface traps are a key technology to enable these systems to scale to the very large number of ions that will be required for large-scale quantum computing,” says Jeremy Sage, who together with John Chiaverini leads Lincoln Laboratory’s trapped-ion quantum-information-processing project. “These cage traps work very well, but they really only work for maybe 10 to 20 ions, and they basically max out around there.”

Performing a quantum computation, however, requires precisely controlling the energy state of every qubit independently, and trapped-ion qubits are controlled with laser beams. In a surface trap, the ions are only about 5 micrometers apart. Hitting a single ion with an external laser, without affecting its neighbors, is incredibly difficult; only a few groups had previously attempted it, and their techniques weren’t practical for large-scale systems.

Getting onboard

That’s where Ram’s group comes in. Ram and Karan Mehta, an MIT graduate student in electrical engineering and first author on the new paper, designed and built a suite of on-chip optical components that can channel laser light toward individual ions. Sage, Chiaverini, and their Lincoln Lab colleagues Colin Bruzewicz and Robert McConnell retooled their surface trap to accommodate the integrated optics without compromising its performance. Together, both groups designed and executed the experiments to test the new system.

“Typically, for surface electrode traps, the laser beam is coming from an optical table and entering this system, so there’s always this concern about the beam vibrating or moving,” Ram says. “With photonic integration, you’re not concerned about beam-pointing stability, because it’s all on the same chip that the electrodes are on. So now everything is registered against each other, and it’s stable.”

The researchers’ new chip is built on a quartz substrate. On top of the quartz is a network of silicon nitride “waveguides,” which route laser light across the chip. Above the waveguides is a layer of glass, and on top of that are niobium electrodes with tiny holes in them to allow light to pass through. Beneath the holes in the electrodes, the waveguides break into a series of sequential ridges, a “diffraction grating” precisely engineered to direct light up through the holes and concentrate it into a beam narrow enough that it will target a single ion, 50 micrometers above the surface of the chip.

Prototype display enables viewers to watch 3-D movies without special glasses

For moviegoers who find 3-D glasses a nuisance, there may be hope. In a new paper, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Israel’s Weizmann Institute of Science has demonstrated a display that lets audiences watch 3-D films in a movie theater without extra eyewear.

Dubbed “Cinema 3D,” the prototype uses a special array of lenses and mirrors to enable viewers to watch a 3-D movie from any seat in a theater.

“Existing approaches to glasses-free 3-D require screens whose resolution requirements are so enormous that they are completely impractical,” says MIT professor Wojciech Matusik, one of the co-authors on a related paper whose first author is Weizmann PhD Netalee Efrat. “This is the first technical approach that allows for glasses-free 3-D on a large scale.”

While the researchers caution that the system isn’t currently market-ready, they are optimistic that future versions could push the technology to a place where theaters would be able to offer glasses-free alternatives for 3-D movies.

Among the paper’s co-authors are MIT research technician Mike Foshey; former CSAIL postdoc Piotr Didyk; and two Weizmann researchers, Efrat and professor Anat Levin. Efrat will present the paper at this week’s SIGGRAPH computer-graphics conference in Anaheim, California.

Glasses-free 3-D already exists, but not in a way that scales to movie theaters. Traditional methods for TV sets use a series of slits in front of the screen (a “parallax barrier”) that allows each eye to see a different set of pixels, creating a simulated sense of depth.

But because parallax barriers have to be at a consistent distance from the viewer, this approach isn’t practical for larger spaces like theaters that have viewers at different angles and distances.

Other methods, including one from the MIT Media Lab, involve developing completely new physical projectors that cover the entire angular range of the audience. However, this often comes at the cost of lower image resolution.

The key insight with Cinema 3D is that people in movie theaters move their heads only over a very small range of angles, limited by the width of their seat. Thus, it is enough to display images to a narrow range of angles and replicate that to all seats in the theater.
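
A back-of-envelope check of that insight, with an assumed 60-centimeter range of side-to-side head movement:

```python
# At theater distances, a viewer's head spans only a few degrees as seen
# from the screen. The seat width and distances are assumptions.
import math

seat_width = 0.6  # meters of side-to-side head movement, assumed
for distance in (5.0, 10.0, 20.0):  # meters from the screen
    angle = math.degrees(2 * math.atan(seat_width / 2 / distance))
    print(f"{distance:>4.0f} m -> {angle:.1f} degrees")
# ->   5 m -> 6.9 degrees; 10 m -> 3.4 degrees; 20 m -> 1.7 degrees
```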

What Cinema 3D does, then, is encode multiple parallax barriers in one display, such that each viewer sees a parallax barrier tailored to their position. That range of views is then replicated across the theater by a series of mirrors and lenses within Cinema 3D’s special optics system.

“With a 3-D TV, you have to account for people moving around to watch from different angles, which means that you have to divide up a limited number of pixels to be projected so that the viewer sees the image from wherever they are,” says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University, who was not involved in the research. “The authors [of Cinema 3D] cleverly exploited the fact that theaters have a unique set-up in which every person sits in a more or less fixed position the whole time.”

The team demonstrated that their approach allows viewers from different parts of an auditorium to see images of consistently high resolution.

Cinema 3D isn’t particularly practical at the moment: The team’s prototype requires 50 sets of mirrors and lenses, and yet is just barely larger than a pad of paper. But, in theory, the technology could work in any context in which 3-D visuals would be shown to multiple people at the same time, such as billboards or storefront advertisements. Matusik says that the team hopes to build a larger version of the display and to further refine the optics to continue to improve the image resolution.

“It remains to be seen whether the approach is financially feasible enough to scale up to a full-blown theater,” says Matusik. “But we are optimistic that this is an important next step in developing glasses-free 3-D for large spaces like movie theaters and auditoriums.”

Analysis of ant colony behavior

Ants, it turns out, are extremely good at estimating the concentration of other ants in their vicinity. This ability appears to play a role in several communal activities, particularly in the voting procedure whereby an ant colony selects a new nest.

Biologists have long suspected that ants base their population-density estimates on the frequency with which they — literally — bump into other ants while randomly exploring their environments.

That theory gets new support from a theoretical paper that researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present at the Association for Computing Machinery’s Symposium on Principles of Distributed Computing later this month. The paper shows that observations from random exploration of the environment converge very quickly on an accurate estimate of population density. Indeed, they converge about as quickly as is theoretically possible.

Beyond offering support for biologists’ suppositions, this theoretical framework also applies to the analysis of social networks, of collective decision making among robot swarms, and of communication in ad hoc networks, such as networks of low-cost sensors scattered in forbidding environments.

“It’s intuitive that if a bunch of people are randomly walking around an area, the number of times they bump into each other will be a surrogate of the population density,” says Cameron Musco, an MIT graduate student in electrical engineering and computer science and a co-author on the new paper. “What we’re doing is giving a rigorous analysis behind that intuition, and also saying that the estimate is a very good estimate, rather than some coarse estimate. As a function of time, it gets more and more accurate, and it goes nearly as fast as you would expect you could ever do.”

Random walks

Musco and his coauthors — his advisor, NEC Professor of Software Science and Engineering Nancy Lynch, and Hsin-Hao Su, a postdoc in Lynch’s group — characterize an ant’s environment as a grid, with some number of other ants scattered randomly across it. The ant of interest — call it the explorer — starts at some cell of the grid and, with equal probability, moves to one of the adjacent cells. Then, with equal probability, it moves to one of the cells adjacent to that one, and so on. In statistics, this is referred to as a “random walk.” The explorer counts the number of other ants inhabiting every cell it visits.

In their paper, the researchers compare the random walk to random sampling, in which cells are selected from the grid at random and the number of ants counted. The accuracy of both approaches improves with each additional sample, but remarkably, the random walk converges on the true population density virtually as quickly as random sampling does.
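
A toy simulation of that idea, with arbitrary grid size, density, and walk length; it shows the walk's running estimate approaching the true density.

```python
# Random-walk density estimation on a wrapping (torus) grid.
# Grid size, density, and walk length are arbitrary choices.
import random

random.seed(0)
N = 50                                   # grid is N x N, wrapping at edges
density = 0.2                            # true fraction of occupied cells
grid = [[1 if random.random() < density else 0 for _ in range(N)]
        for _ in range(N)]

x = y = 0
seen = 0
steps = 5000
for _ in range(steps):
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    x, y = (x + dx) % N, (y + dy) % N    # move to a random adjacent cell
    seen += grid[y][x]                   # count the ants in the visited cell

print("true density ~", sum(map(sum, grid)) / N**2)
print("walk estimate ~", seen / steps)
```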

That’s important because in many practical cases, random sampling isn’t an option. Suppose, for instance, that you want to write an algorithm to analyze an online social network — say, to estimate what fraction of the network self-describes as Republican. There’s no publicly available list of the network’s members; the only way to explore it is to pick an individual member and start tracing connections.

Similarly, in ad hoc networks, a given device knows only the locations of the devices in its immediate vicinity; it doesn’t know the layout of the network as a whole. An algorithm that uses random walks to aggregate information from multiple devices would be much easier to implement than one that has to characterize the network as a whole.

Language for programming efficient simulations

Computer simulations of physical systems are common in science, engineering, and entertainment, but they use several different types of tools.

If, say, you want to explore how a crack forms in an airplane wing, you need a very precise physical model of the crack’s immediate vicinity. But if you want to simulate the flexion of an airplane wing under different flight conditions, it’s more practical to use a simpler, higher-level description of the wing.

If, however, you want to model the effects of wing flexion on the crack’s propagation, or vice versa, you need to switch back and forth between these two levels of description, which is difficult not only for computer programmers but for computers, too.

A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas has developed a new programming language that handles that switching automatically.

In experiments, simulations written in the language were dozens or even hundreds of times as fast as those written in existing simulation languages. But they required only one-tenth as much code as meticulously hand-optimized simulations that could achieve similar execution speeds.

“The story of this paper is that the trade-off between concise code and good performance is false,” says Fredrik Kjolstad, an MIT graduate student in electrical engineering and computer science and first author on a new paper describing the language. “It’s not necessary, at least for the problems that this applies to. But it applies to a large class of problems.”

Indeed, Kjolstad says, the researchers’ language has applications outside physical simulation, in machine learning, data analytics, optimization, and robotics, among other areas. Kjolstad and his colleagues have already used the language to implement a version of Google’s original PageRank algorithm for ordering search results, and they’re currently collaborating with researchers in MIT’s Department of Physics on an application in quantum chromodynamics, a theory of the “strong force” that holds atomic nuclei together.

“I think this is a language that is not just going to be for physical simulations for graphics people,” says Saman Amarasinghe, Kjolstad’s advisor and a professor of electrical engineering and computer science (EECS). “I think it can do a lot of other things. So we are very optimistic about where it’s going.”

Kjolstad presented the paper in July at the Association for Computing Machinery’s Siggraph conference, the major conference in computer graphics. His co-authors include Amarasinghe; Wojciech Matusik, an associate professor of EECS; and Gurtej Kanwar, who was an MIT undergraduate when the work was done but is now an MIT PhD student in physics.

Graphs vs. matrices

As Kjolstad explains, the distinction between the low-level and high-level descriptions of physical systems is more properly described as the distinction between descriptions that use graphs and descriptions that use linear algebra.

In this context, a graph is a mathematical structure that consists of nodes, typically represented by circles, and edges, typically represented as line segments connecting the nodes. Edges and nodes can have data associated with them. In a physical simulation, that data might describe tiny triangles or tetrahedra that are stitched together to approximate the curvature of a smooth surface. Low-level simulation might require calculating the individual forces acting on, say, every edge and face of each tetrahedron.

Linear algebra instead represents a physical system as a collection of points, which exert forces on each other. Those forces are described by a big grid of numbers, known as a matrix. Simulating the evolution of the system in time involves multiplying the matrix by other matrices, or by vectors, which are individual rows or columns of numbers.

Matrix manipulations are second nature to many scientists and engineers, and popular simulation software such as MATLAB provides a vocabulary for describing them. But using MATLAB to produce graph-based models requires special-purpose code that translates the forces acting on, say, individual tetrahedra into a matrix describing interactions between points. For every frame of a simulation, that code has to convert tetrahedra to points, perform matrix manipulations, and then map the results back onto tetrahedra. This slows the simulation down drastically.
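
To see what that translation looks like in miniature, here is a hedged Python sketch: a graph's edges are assembled into an interaction matrix (a toy spring network standing in for a real physical model), after which one simulation step is just a matrix-vector product. This is the kind of boilerplate Simit is designed to generate automatically.

```python
# Graph-to-matrix assembly in miniature. The spring-like Laplacian is an
# illustrative stand-in for a real physical model.
import numpy as np

nodes = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a small ring of springs

# Graph -> matrix: each edge contributes to four entries of the
# interaction matrix (dense here for brevity; real systems use sparse).
K = np.zeros((nodes, nodes))
for i, j in edges:
    K[i, i] += 1.0
    K[j, j] += 1.0
    K[i, j] -= 1.0
    K[j, i] -= 1.0

# Matrix -> simulation step: forces are a matrix-vector product.
positions = np.array([0.0, 1.1, 2.0, 2.9])
forces = -K @ positions
print(forces)
```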

So programmers who need to factor in graphical descriptions of physical systems will often write their own code from scratch. But manipulating data stored in graphs can be complicated, and tracking those manipulations requires much more code than matrix manipulation does. “It’s not just that it’s a lot of code,” says Kjolstad. “It’s also complicated code.”

Automatic translation

Kjolstad and his colleagues’ language, which is called Simit, requires the programmer to describe the translation between the graphical description of a system and the matrix description. But thereafter, the programmer can use the language of linear algebra to program the simulation.