
Built-in optics could enable chips that use trapped ions as quantum bits

Quantum computers are largely hypothetical devices that could perform some calculations much more rapidly than conventional computers can. Instead of the bits of classical computation, which can represent 0 or 1, quantum computers consist of quantum bits, or qubits, which can, in some sense, represent 0 and 1 simultaneously.
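In the standard textbook notation (a general fact about qubits, not something specific to this work), that "in some sense" is a weighted superposition of the two classical values,

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where a measurement returns 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$.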

Although quantum systems with as many as 12 qubits have been demonstrated in the lab, building quantum computers complex enough to perform useful computations will require miniaturizing qubit technology, much the way the miniaturization of transistors enabled modern computers.

Trapped ions are probably the most widely studied qubit technology, but they’ve historically required a large and complex hardware apparatus. In today’s Nature Nanotechnology, researchers from MIT and MIT Lincoln Laboratory report an important step toward practical quantum computers, with a paper describing a prototype chip that can trap ions in an electric field and, with built-in optics, direct laser light toward each of them.

“If you look at the traditional assembly, it’s a barrel that has a vacuum inside it, and inside that is this cage that’s trapping the ions. Then there’s basically an entire laboratory of external optics that are guiding the laser beams to the assembly of ions,” says Rajeev Ram, an MIT professor of electrical engineering and one of the senior authors on the paper. “Our vision is to take that external laboratory and miniaturize much of it onto a chip.”

Caged in

The Quantum Information and Integrated Nanosystems group at Lincoln Laboratory was one of several research groups already working to develop simpler, smaller ion traps known as surface traps. A standard ion trap looks like a tiny cage, whose bars are electrodes that produce an electric field. Ions line up in the center of the cage, parallel to the bars. A surface trap, by contrast, is a chip with electrodes embedded in its surface. The ions hover 50 micrometers above the electrodes.

Cage traps are intrinsically limited in size, but surface traps could, in principle, be extended indefinitely. With current technology, they would still have to be held in a vacuum chamber, but they would allow many more qubits to be crammed inside.

“We believe that surface traps are a key technology to enable these systems to scale to the very large number of ions that will be required for large-scale quantum computing,” says Jeremy Sage, who together with John Chiaverini leads Lincoln Laboratory’s trapped-ion quantum-information-processing project. “These cage traps work very well, but they really only work for maybe 10 to 20 ions, and they basically max out around there.”

Performing a quantum computation, however, requires precisely controlling the energy state of every qubit independently, and trapped-ion qubits are controlled with laser beams. In a surface trap, the ions are only about 5 micrometers apart. Hitting a single ion with an external laser, without affecting its neighbors, is incredibly difficult; only a few groups had previously attempted it, and their techniques weren’t practical for large-scale systems.

Getting onboard

That’s where Ram’s group comes in. Ram and Karan Mehta, an MIT graduate student in electrical engineering and first author on the new paper, designed and built a suite of on-chip optical components that can channel laser light toward individual ions. Sage, Chiaverini, and their Lincoln Lab colleagues Colin Bruzewicz and Robert McConnell retooled their surface trap to accommodate the integrated optics without compromising its performance. Together, both groups designed and executed the experiments to test the new system.

“Typically, for surface electrode traps, the laser beam is coming from an optical table and entering this system, so there’s always this concern about the beam vibrating or moving,” Ram says. “With photonic integration, you’re not concerned about beam-pointing stability, because it’s all on the same chip that the electrodes are on. So now everything is registered against each other, and it’s stable.”

The researchers’ new chip is built on a quartz substrate. On top of the quartz is a network of silicon nitride “waveguides,” which route laser light across the chip. Above the waveguides is a layer of glass, and on top of that are niobium electrodes with tiny holes in them to allow light to pass through. Beneath the holes in the electrodes, the waveguides break into a series of sequential ridges, a “diffraction grating” precisely engineered to direct light up through the holes and concentrate it into a beam narrow enough that it will target a single ion, 50 micrometers above the surface of the chip.
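As a rough sketch of how such a grating sets the emission direction (the numbers below are assumptions for illustration, not the paper’s design values), the emission angle of a waveguide grating coupler radiating into vacuum follows the first-order phase-matching condition sin θ = n_eff − λ/Λ:

```julia
# Illustrative sketch only: all parameters are assumed, not taken from the paper.
λ     = 674e-9     # free-space wavelength of the qubit laser (m), assumed
n_eff = 1.7        # effective index of the silicon nitride waveguide mode, assumed
Λ     = 450e-9     # grating period (m), assumed

# First-order phase matching: sin(θ) = n_eff - λ/Λ, with θ measured from the
# surface normal. Choosing Λ close to λ/n_eff sends the beam nearly straight up
# through the hole in the electrode toward the trapped ion.
θ = asind(n_eff - λ/Λ)    # ≈ 12° from vertical for these assumed values
```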

Julia programming language is helping to solve complex problems

“Julia is a great tool.” That’s what New York University professor of economics and Nobel laureate Thomas J. Sargent told 250 engineers, computer scientists, programmers, and data scientists at the third annual JuliaCon held at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

If you have not yet heard of Julia, it is not a “who,” but a “what.” Developed at CSAIL, the MIT Department of Mathematics, and throughout the Julia community, it is a fast-maturing programming language designed to be simple to learn, highly dynamic, and operational at the speed of C, with uses ranging from general programming to highly quantitative applications such as scientific computing, machine learning, data mining, large-scale linear algebra, and distributed and parallel computing. The language was released as open source in 2012 and has begun to amass a large following of users and contributors.
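As a small, purely illustrative taste of that combination of simplicity and speed (not code from any JuliaCon talk), a single generic function is specialized by Julia’s compiler into efficient machine code for whatever numeric type it is given:

```julia
# Illustrative sketch: an escape-time iteration written once, compiled separately
# for Float64, BigFloat, or any other numeric type it is called with.
function escape_time(c; maxiter = 100)
    z = zero(c)
    for i in 1:maxiter
        z = z^2 + c
        abs2(z) > 4 && return i
    end
    return maxiter
end

escape_time(complex(-0.5, 0.6))           # ordinary double precision
escape_time(complex(big"-0.5", big"0.6")) # arbitrary precision, same source code
```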

This year’s JuliaCon, held June 21-25, was the biggest yet and featured presentations describing how Julia is being used to solve complex problems in areas as diverse as economic modeling, spaceflight, and bioinformatics, among many others.

“We are very excited about Julia because our models are complicated,” said Sargent, who is also a senior fellow at the Hoover Institution. “It’s easy to write the problem down, but it’s hard to solve it — especially if our model is high dimensional. That’s why we need Julia. Figuring out how to solve these problems requires some creativity. The guys who deserve a lot of the credit are the ones who figured out how to put this into a computer. This is a walking advertisement for Julia.” Sargent added that the reason Julia is important is because the next generation of macroeconomic models is very computationally intensive, using high-dimensional models and fitting them over extremely large data sets.

Sargent was awarded the Nobel Memorial Prize in Economic Sciences in 2011 for his work on macroeconomics. Together with John Stachurski, he founded quantecon.net, a Julia- and Python-based learning platform for quantitative economics that focuses on algorithms and numerical methods for studying economic problems, as well as coding skills.

The Julia programming language was created and open-sourced thanks, in part, to a 2012 innovation grant awarded by the MIT Deshpande Center for Technological Innovation. Julia combines the functionality of quantitative environments such as MATLAB, R, SPSS, Stata, SAS, and Python with the speed of production programming languages like Java and C++ to solve big data and analytics problems. It delivers dramatic improvements in simplicity, speed, capacity, and productivity for data scientists, algorithmic traders, quants, scientists, and engineers who need to solve massive computation problems quickly and accurately. The number of Julia users has grown dramatically during the last five years, doubling every nine months. It is taught at MIT, Stanford University, and dozens of universities worldwide. Julia 0.5 will launch this month and Julia 1.0 in 2017.

Prototype display enables viewers to watch 3-D films without extra eyewear

For moviegoers tired of donning special glasses, there may be hope. In a new paper, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Israel’s Weizmann Institute of Science has demonstrated a display that lets audiences watch 3-D films in a movie theater without extra eyewear.

Dubbed “Cinema 3D,” the prototype uses a special array of lenses and mirrors to enable viewers to watch a 3-D movie from any seat in a theater.

“Existing approaches to glasses-free 3-D require screens whose resolution requirements are so enormous that they are completely impractical,” says MIT professor Wojciech Matusik, one of the co-authors on a related paper whose first author is Weizmann PhD Netalee Efrat. “This is the first technical approach that allows for glasses-free 3-D on a large scale.”

While the researchers caution that the system isn’t currently market-ready, they are optimistic that future versions could push the technology to a place where theaters would be able to offer glasses-free alternatives for 3-D movies.

Among the paper’s co-authors are MIT research technician Mike Foshey; former CSAIL postdoc Piotr Didyk; and Weizmann researchers Efrat and professor Anat Levin. Efrat will present the paper at this week’s SIGGRAPH computer-graphics conference in Anaheim, California.

Glasses-free 3-D already exists, but not in a way that scales to movie theaters. Traditional methods for TV sets use a series of slits in front of the screen (a “parallax barrier”) that allows each eye to see a different set of pixels, creating a simulated sense of depth.

But because parallax barriers have to be at a consistent distance from the viewer, this approach isn’t practical for larger spaces like theaters that have viewers at different angles and distances.
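A back-of-the-envelope version of that constraint, using textbook two-view parallax-barrier geometry with assumed numbers (not figures from the paper), shows that the gap between the barrier and the pixels fixes a single designed viewing distance:

```julia
# Illustrative only: all values are assumed for the sketch.
p = 0.1e-3   # pixel pitch (m)
e = 65e-3    # typical human eye separation (m)
g = 5e-3     # gap between the parallax barrier and the pixel plane (m)

# Similar triangles: two adjacent pixels seen through one slit separate to the
# viewer's two eyes only at roughly this distance, so viewers sitting much
# nearer or farther see the wrong pixels.
D = g * e / p   # ≈ 3.25 m
```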

Other methods, including one from the MIT Media Lab, involve developing completely new physical projectors that cover the entire angular range of the audience. However, this often comes at the cost of lower image resolution.

The key insight with Cinema 3D is that people in movie theaters move their heads only over a very small range of angles, limited by the width of their seat. Thus, it is enough to display images to a narrow range of angles and replicate that to all seats in the theater.
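A quick estimate (with assumed seat and theater dimensions) shows just how small that range is:

```julia
# Illustrative only: seat width and screen distance are assumed.
seat_width = 0.55    # meters
distance   = 12.0    # meters from the viewer to the screen

# Full angle of head motion a single seat allows, as seen from the screen.
angle_deg = 2 * atand((seat_width / 2) / distance)   # ≈ 2.6°
```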

What Cinema 3D does, then, is encode multiple parallax barriers in one display, such that each viewer sees a parallax barrier tailored to their position. That range of views is then replicated across the theater by a series of mirrors and lenses within Cinema 3D’s special optics system.

“With a 3-D TV, you have to account for people moving around to watch from different angles, which means that you have to divide up a limited number of pixels to be projected so that the viewer sees the image from wherever they are,” says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University, who was not involved in the research. “The authors [of Cinema 3D] cleverly exploited the fact that theaters have a unique set-up in which every person sits in a more or less fixed position the whole time.”

The team demonstrated that their approach allows viewers from different parts of an auditorium to see images of consistently high resolution.

Cinema 3D isn’t particularly practical at the moment: The team’s prototype requires 50 sets of mirrors and lenses, and yet is just barely larger than a pad of paper. But, in theory, the technology could work in any context in which 3-D visuals would be shown to multiple people at the same time, such as billboards or storefront advertisements. Matusik says that the team hopes to build a larger version of the display and to further refine the optics to continue to improve the image resolution.

“It remains to be seen whether the approach is financially feasible enough to scale up to a full-blown theater,” says Matusik. “But we are optimistic that this is an important next step in developing glasses-free 3-D for large spaces like movie theaters and auditoriums.”

Analysis of ant colony behavior

Ants, it turns out, are extremely good at estimating the concentration of other ants in their vicinity. This ability appears to play a role in several communal activities, particularly in the voting procedure whereby an ant colony selects a new nest.

Biologists have long suspected that ants base their population-density estimates on the frequency with which they — literally — bump into other ants while randomly exploring their environments.

That theory gets new support from a theoretical paper that researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present later this month at the Association for Computing Machinery’s Symposium on Principles of Distributed Computing. The paper shows that observations from random exploration of the environment converge very quickly on an accurate estimate of population density. Indeed, they converge about as quickly as is theoretically possible.

Beyond offering support for biologists’ suppositions, this theoretical framework also applies to the analysis of social networks, of collective decision making among robot swarms, and of communication in ad hoc networks, such as networks of low-cost sensors scattered in forbidding environments.

“It’s intuitive that if a bunch of people are randomly walking around an area, the number of times they bump into each other will be a surrogate of the population density,” says Cameron Musco, an MIT graduate student in electrical engineering and computer science and a co-author on the new paper. “What we’re doing is giving a rigorous analysis behind that intuition, and also saying that the estimate is a very good estimate, rather than some coarse estimate. As a function of time, it gets more and more accurate, and it goes nearly as fast as you would expect you could ever do.”

Random walks

Musco and his coauthors — his advisor, NEC Professor of Software Science and Engineering Nancy Lynch, and Hsin-Hao Su, a postdoc in Lynch’s group — characterize an ant’s environment as a grid, with some number of other ants scattered randomly across it. The ant of interest — call it the explorer — starts at some cell of the grid and, with equal probability, moves to one of the adjacent cells. Then, with equal probability, it moves to one of the cells adjacent to that one, and so on. In statistics, this is referred to as a “random walk.” The explorer counts the number of other ants inhabiting every cell it visits.

In their paper, the researchers compare the random walk to random sampling, in which cells are selected from the grid at random and the number of ants counted. The accuracy of both approaches improves with each additional sample, but remarkably, the random walk converges on the true population density virtually as quickly as random sampling does.
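To make that comparison concrete, here is a minimal simulation sketch (my own construction, with an assumed grid size, density, and wraparound at the edges; it is not the authors’ code or their exact model) that estimates density both by random sampling and by a random walk:

```julia
using Random

# Illustrative sketch only; grid size, density, and step counts are assumptions.
rng = MersenneTwister(1)
N = 200                                   # the grid is N×N with wraparound (a torus)
density = 0.1                             # probability that a cell holds an ant
ants = [rand(rng) < density ? 1 : 0 for _ in 1:N, _ in 1:N]

# Estimate density by checking `steps` cells chosen uniformly at random.
function estimate_by_sampling(ants, steps, rng)
    n = size(ants, 1)
    total = sum(ants[rand(rng, 1:n), rand(rng, 1:n)] for _ in 1:steps)
    return total / steps
end

# Estimate density by a random walk: at each step move to a uniformly chosen
# neighboring cell (with wraparound) and count the ants found there.
function estimate_by_walk(ants, steps, rng)
    n = size(ants, 1)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = rand(rng, 1:n), rand(rng, 1:n)
    total = 0
    for _ in 1:steps
        dx, dy = rand(rng, moves)
        x, y = mod1(x + dx, n), mod1(y + dy, n)
        total += ants[x, y]
    end
    return total / steps
end

# Both estimates approach the true density (≈ 0.1) as `steps` grows.
estimate_by_sampling(ants, 10_000, rng), estimate_by_walk(ants, 10_000, rng)
```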

That’s important because in many practical cases, random sampling isn’t an option. Suppose, for instance, that you want to write an algorithm to analyze an online social network — say, to estimate what fraction of the network self-describes as Republican. There’s no publicly available list of the network’s members; the only way to explore it is to pick an individual member and start tracing connections.

Similarly, in ad hoc networks, a given device knows only the locations of the devices in its immediate vicinity; it doesn’t know the layout of the network as a whole. An algorithm that uses random walks to aggregate information from multiple devices would be much easier to implement than one that has to characterize the network as a whole.