INC in the News

(04/04/2013) President Obama announces BRAIN Initiative


Mapping the Mind

President Obama announces BRAIN Initiative in which UC San Diego, 'Mesa' colleagues and private-public partners will play key roles

April 4th, 2013

By Inga Kiderra


President Barack Obama is introduced by Dr. Francis Collins, Director, National Institutes of Health, at the BRAIN Initiative event in the East Room of the White House, April 2, 2013. (Official White House Photo by Chuck Kennedy)

The President of the United States gathered together on April 2 "some of the smartest people in the country, some of the most imaginative and effective researchers in the country," he said, to hear him announce a broad and collaborative research initiative designed to revolutionize our understanding of the brain.

The BRAIN Initiative, short for Brain Research through Advancing Innovative Neurotechnologies, is launching with approximately $100 million in proposed funding in the president's Fiscal Year 2014 budget. It aims to advance the science and technologies needed to map and decipher brain activity.

Sitting in the front row for the announcement were three University of California chancellors, including UC San Diego's Pradeep K. Khosla.

Chancellor Khosla was accompanied at the White House by Ralph Greenspan, associate director of the Kavli Institute for Brain and Mind at UC San Diego (KIBM); Terry Sejnowski of the Salk Institute for Biological Studies and UC San Diego, director of the campus's Institute for Neural Computation; KIBM Director Nick Spitzer, distinguished professor of neurobiology in the Division of Biological Sciences; and Dr. Dilip V. Jeste, Estelle and Edgar Levi Chair in Aging, distinguished professor of psychiatry and neurosciences at UC San Diego School of Medicine, and director of the Stein Institute.

"As humans, we can identify galaxies light years away, we can study particles smaller than an atom," President Barack Obama said. "But we still haven't unlocked the mystery of the three pounds of matter that sits between our ears."

The human brain, Obama pointed out, has some 100 billion neurons and trillions of connections between them. Right now, he said, borrowing a musical metaphor from National Institutes of Health Director Francis Collins, we can make out just the string section of the orchestra. The BRAIN Initiative aims to make it possible to hear the entire symphony.

Speaking from D.C., Chancellor Khosla said he was struck by the levels of energy and enthusiasm during the president's speech and in the community afterwards.

"The president's initiative is charting the next frontier of science," Khosla said, "and UC San Diego is poised and ready to help our country lead the way. Neuroscience, biology, and cognitive science are among the premier areas of strength on our campus, and we are really excited to be part of the effort to gain a deep understanding of human beings and how we behave.

"We anticipate our scientists will continue to play key roles in this great endeavor," Khosla said. "Researchers from UC San Diego—in collaboration with colleagues at Salk and others on the Torrey Pines Mesa—will be involved in almost all areas of the BRAIN Initiative, from those in the sciences and engineering who will help to draw the brain-activity map to those in the social sciences who will help to read the map, figuring out how brain activity translates into cognition."

In addition to research funding support from three federal agencies—the National Institutes of Health, the Defense Advanced Research Projects Agency, and the National Science Foundation—the BRAIN Initiative is also supported by financial commitments from the private sector. These include longtime UC San Diego partners the Salk Institute and the Kavli Foundation.

The Kavli Foundation and the Kavli Institute for Brain and Mind, said KIBM's Greenspan, played important roles in sparking the BRAIN Initiative.

The audacious idea of a comprehensive brain activity map was first discussed at a seminal meeting in September 2011 of 13 neuroscientists and 14 nanoscientists at the Kavli Royal Society International Centre outside of London. Greenspan was one of the leaders who fleshed out the idea and drafted a white paper. He also credits in particular Miyoung Chun, the Kavli Foundation's vice president of science programs. She was key, he said, first in connecting with the White House Office of Science and Technology Policy and then in keeping "us all on track in developing and expanding the idea over the next year and a half."

He is, along with Chun and colleagues from Berkeley, Caltech, Harvard and Columbia, one of the original six architects of the catalytic proposal published in Neuron in June 2012.

Also speaking from D.C., right after attending the White House announcement, Greenspan recalled "falling off his chair" when he heard the president reference brain mapping in his 2013 State of the Union Address, and falling off it again when the New York Times' John Markoff broke the story in February, far ahead of the official announcement. "I still think it's unbelievable it's come to pass," he said. "It's a miracle of the right idea falling on fertile ground at the right time."

"We're at a crossroads in the history of neuroscience and the history of nanoscience," Greenspan said. "We're at a stage now where a marriage of the two can create the synergy we've dreamed about but so far hasn't been possible."

See President Obama's speech on the BRAIN Initiative here...

Jeff Elman, dean of the Division of Social Sciences at UC San Diego, echoed Greenspan's sentiments. Elman is a co-founder of UC San Diego's department of cognitive science, the first of its kind in the world, and co-director emeritus of the KIBM, launched at UC San Diego in 2004 to support interdisciplinary research ranging from "the brain's physical and biochemical machinery to the experiences and behaviors called the mind."

"Ten years ago, comprehensively mapping human brain activity would have been fanciful. The technology required would have seemed like science fiction," Elman said. "Today, the technology and the goal appear to be within our grasp."

Khosla, Greenspan and Elman all agreed with President Obama's comment during his speech at the White House that this is just a beginning—a very exciting beginning to an effort that will yield many positive consequences.

In his speech, Obama stressed the importance of ideas and innovation to the U.S. economy and reminded the nation that it is critical to invest in basic research. Understanding the brain's complex circuits of neurons, and the behaviors to which these give rise, he said, will eventually lead to treatments for brain disorders, such as Alzheimer's or autism, but it will also result in applications we can't even imagine yet.

He compared BRAIN to the Human Genome Project, which enabled scientists to map the entire human genome and helped create not only jobs but also a whole new era of genomic medicine. And he characterized it as one of his administration's "Grand Challenges for the 21st Century," ambitious but achievable goals like "making solar energy as cheap as coal or making electric vehicles as affordable as the ones that run on gas."

Obama said, "We have a chance to improve the lives of not just millions, but billions of people on this planet through the research that's done in this BRAIN initiative alone. But it's going to require a serious effort, a sustained effort. And it's going to require us as a country to embody and embrace that spirit of discovery that made America – America."

"Let's get to work," he said in closing.


(03/07/2013) Could learning music help children with attention disorders?


Power of Art: Can music help treat children with attention disorders?

March 7th, 2013

by Jane O'Brien
BBC News Washington

See full article and flash video here...

Could learning music help children with attention disorders? New research suggests playing a musical instrument improves the ability to focus attention.


To the musical ear, life has a rhythm comparable to grand opera or simple folk tunes. Our ability to understand that rhythm and synchronise with each other is at the core of every human interaction.

That's why researchers in San Diego believe that learning to play musical instruments can help us focus attention and improve our ability to interact with the world around us.

For more than a year, children at the city's Museum School have been taking part in an experiment involving Gamelan, a percussion style of ensemble music from Indonesia that emphasizes synchronicity.

Sensors attached to the instruments monitor the children's ability to hit the beat precisely. The data are analyzed, and a mathematical algorithm is used to compute a baseline measure of each child's accuracy. That measure is then compared to the results of behavioural and cognitive tests, and to assessments by teachers and parents.

"So far, we've found a correlation between their ability to synchronise and their performance on cognitive tests," says Alexander Khalil, head of the Gamelan Project, funded by the National Science Foundation.

"What this could mean, is that learning to time in a group setting with other people musically, could improve your ability to focus attention."
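The measurement pipeline described above can be sketched in a few lines. This is an illustrative reconstruction, not the Gamelan Project's actual algorithm: the fixed beat grid, the example timestamps, and the mean-absolute-deviation score are all assumptions made for the example.

```python
import statistics

def synchronization_error(hit_times, beat_interval):
    """Mean absolute deviation (in seconds) of each strike from its nearest beat.

    hit_times: timestamps (s) of a player's strikes, as logged by instrument sensors.
    beat_interval: spacing (s) of the target pulse.
    Smaller values mean tighter synchronization with the ensemble.
    """
    deviations = []
    for t in hit_times:
        # Snap each strike to the nearest grid beat and measure the offset.
        nearest_beat = round(t / beat_interval) * beat_interval
        deviations.append(abs(t - nearest_beat))
    return statistics.mean(deviations)

# A hypothetical player hitting slightly ahead of or behind a 0.5 s pulse:
hits = [0.02, 0.51, 0.98, 1.53, 2.00]
print(synchronization_error(hits, 0.5))  # small value = good synchrony
```

A per-child score like this could then be correlated against cognitive-test results, which is the comparison the article describes.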

Khalil began the research after several years of noticing that children who lacked the ability to synchronize also struggled to pay attention during other activities. As their musical ability improved, so did their attention.

"It is possible that music practice could become a non-pharmacological intervention for problems such as ADHD (attention deficit-hyperactivity disorder). We haven't tested it yet but it's a possibility - and an exciting possibility," he says.

ADHD is a neurobehavioral disorder that affects one in 10 children in the US. They have problems paying attention, controlling impulsive behaviour and can be overly active. It can't be cured but the symptoms can be managed - often with medication.

It's thought music might help such children because our sense of timing affects so much of our behaviour.

"The ability to time, to synchronise with others underlies all face to face communication," says Khalil. "People imagine that synchronizing is doing something simultaneously. But synchronizing actually means processing time together - perceiving time together in such a way that we have this common understanding of how time is passing."

Music offers many different layers and levels of time, from the milliseconds it takes to gauge a series of beats, to the minutes of a musical phrase or fragment and the hours of a full performance.


A study participant wears headgear that allows researchers to monitor his brain activity while he moves to music.


"By learning music, one of the things you learn is rhythm and how to be aware of the temporal dynamic of the world around you and how to keep your attention focused on all of these things while you do what you do."

The Gamelan Project is part of a growing body of research into the effects of music on the brain. New imaging technology is making it possible to discover how different areas of brain function are connected.

"Having these ways to look into the brain gives us a tool that we can then use to study the effects of music on the growth and development of the brain," says Professor John Iversen of the Institute of Neural Computation at the University of California San Diego.

He's heading the Symphony Project, one of the first longitudinal studies of its kind on the effects of musical training on brain development.

"There's always this nature/nurture question - are musicians' brains different because of music, or are the people with that kind of brain the ones that stuck with music because they're good at it?" says Iversen.

"To really understand whether it's music making these brain changes, you have to study someone as they begin to learn music and as they continue learning music. Then you can see how their brain develops and compare that with children not doing music."

It could be five years before any results of the study are known but scientists are already speculating that it could have far-reaching implications for musical training.

"What if we have some kids that are intensively studying music and we find that their brains grow at an accelerated rate?" says Iversen.

"The more you work out, the bigger your muscles get. The brain may work somewhat like that as well. The more you practice the stronger the circuits will become."

Paula Tallal is co-director of the Center for Molecular and Behavioral Neuroscience at Rutgers University. She has spent her career studying how children use time to process speech. She says people who have had musical training have been shown to have superior processing skills.

"What we don't know is whether there is something common to musical training that is common to attention, sequencing (processing the order in which something occurred), memory and language skills," she says.

"We know from multiple studies that children who have musical training do better at school. We don't need further research to show that. What we're interested in from a scientific perspective is why that occurs. What neural mechanisms are being driven by musical experience, and how do they interact with other abilities?"

She says this research is ever more critical because many schools facing budget shortfalls are cutting music programmes.

"We're creating an impoverishment that nobody understands the long term effects of," she warns.


Media Contact:  Paul K. Mueller, 858.534.8564


(02/05/2013) Research Collaboration Praised by the Army


Research Collaboration Praised by the Army

February 5th, 2013

A success story highlighted in the CaN CTA report.


This month, the Army is conducting a review of its basic research activities. Of all the activities across the Army Research Laboratory's (ARL) directorates, the only success story being highlighted is the effort by the Cognition and Neuroergonomics Collaborative Technology Alliance (CaN CTA). The following is an excerpt from the report:

"The past five years have seen an explosion in the research and development of systems that use online brain-signal measurement and processing to enhance human interactions with computing systems, their environments, and even other humans. These neuroscience-based systems, or "neurotechnologies," are poised to dramatically change the way users interact with technology."

"A team of researchers in the Army's CaN CTA recently published a special section, "Neurotechnological Systems: The Brain-Computer Interface," composed of four manuscripts that appeared in the special 2012 Centennial Celebration issue of the Proceedings of the IEEE — the most highly cited general-interest journal in electrical engineering, electronics, and computer science."

"In this special section, researchers from UCSD, the National Chiao Tung University (Taiwan), and the Army Research Laboratory collaborated closely to define a vision of the evolution of the measurement capabilities, the analytic approaches, and the potential user applications for the future of neurotechnologies in the coming decades. The involvement of CaN CTA researchers in this special section gave the Army an opportunity to help shape the future of a critical technology development domain, and demonstrates the recognition of the Army as a leader in this emerging field."



(01/05/2013) Machine Perception Lab Shows Robotic One-Year-Old on Video


Machine Perception Lab Shows Robotic One-Year-Old on Video

January 5th, 2013

The world is getting a long-awaited first glimpse at a new humanoid robot in action mimicking the expressions of a one-year-old child. The robot will be used in studies on sensory-motor and social development – how babies "learn" to control their bodies and to interact with other people.


Diego-san's hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan's Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.

Movellan directs the Institute for Neural Computation's Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2). The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov's Movement Control Laboratory at the University of Washington.



Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people.

"We've made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the social-interaction side. "We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.

While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san's expressive face in action. On January 6, David Hanson (of Hanson Robotics) posted a new video on YouTube.

"This robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research," wrote Hanson. "With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses A.I. modeled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively to people."

Diego-san is the next step in the development of "emotionally relevant" robotics, building on Hanson's previous work with the Machine Perception Lab, such as the emotionally responsive Albert Einstein head.


“[Diego-san] brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics.”

— Javier Movellan


The video of the oversized android infant was picked up by the popular online technology magazine, Gizmag, with a Jan. 7 article titled "UCSD's robot baby Diego-san appears on video for the first time," written by Jason Falconer.



In his article, Falconer writes that Diego-san is "actually much larger than a standard one year old – mainly because miniaturizing the parts would have been too costly. It stands about 4 feet 3 inches (130cm) tall and weighs 66 pounds (30kg), and its body has a total of 44 pneumatic joints. Its head alone contains about 27 moving parts."

The robot is a product of the "Developing Social Robots" project launched in 2008. As outlined in the proposal, the goal of the project was "to make progress on computational problems that elude the most sophisticated computers and Artificial Intelligence approaches, but that infants solve seamlessly during their first year of life."

For that reason, the robot's sensors and actuators were built to approximate the levels of complexity of human infants, including actuators to replicate dynamics similar to those of human muscles. The technology should allow Diego-san to learn and autonomously develop sensory-motor and communicative skills typical of one-year-old infants.

"Its main goal is to try and understand the development of sensory motor intelligence from a computational point of view," explained principal investigator Movellan in a 2010 Q&A with the Japan-based PlasticPals blog. "It brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics. Basically we are trying to understand the computational problems that a baby's brain faces when learning to move its own body and use it to interact with the physical and social worlds."

The researchers are interested in studying Diego-san's interaction with the physical world via reaching, grasping, etc., and with the social world through pointing, smiling and other gestures or facial expressions.

As outlined in the original proposal to the NSF, the project is "grounded in developmental research with human infants, using motion capture and computer vision technology to characterize the statistics of early physical and social interaction. An important goal is to foster the conceptual shifts needed to rigorously think, explore, and formalize intelligent architectures that learn and develop autonomously by interaction with the physical and social worlds."

According to UCSD's Movellan, the expression recognition technology his team developed for Diego-san has spawned a startup called Machine Perception Technologies (MPT). The company is currently looking for undergraduate interns and postgraduate programmers. "We like UCSD students because they tend to have a strong background in machine learning."

The project may also open new avenues to the computational study of infant development and potentially offer new clues for the understanding of developmental disorders such as autism and Williams syndrome.

As noted in the Gizmag article, Diego-san won't be the only child-like robot for long. This spring Swiss researchers will demonstrate their nearly 4-foot-tall Roboy robot toddler (with a face selected via a Facebook contest!).


The above story is reprinted from materials provided by UCSD News Center. The original article was written by Doug Ramsey.


(05/24/2012) UC San Diego Receives $7 Million from DOD for Innovative Neural Research


UC San Diego Receives $7 Million from DOD for Innovative Neural Research

By Kim McDonald | May 24, 2012

See article here...

Schematic of cooperative brain centers interacting to produce functional neural behavior associated with learning and decision making.


An interdisciplinary team of scientists at UC San Diego composed of physicists, biologists, chemists, bioengineers and psychologists has received a five-year, $7 million grant from the U.S. Department of Defense to investigate the dynamic principles of collective brain activity.

The innovative research effort, which is being funded by the Office of Naval Research under the Defense Department's Multidisciplinary University Research Initiative, or MURI, will also involve scientists at UC Berkeley and the University of Chicago.

The team plans to conduct basic research on how collective action in the brain learns, modulates and produces coherent functional neural activity for coordinated behavior of complex systems.

"This research will tie together theoretical ideas, hardware implementation of structural models and experimental investigations of human and animal behavior to develop a quantitative understanding and a predictive language for discussing complex physical and biological systems," said Henry Abarbanel, a physics professor at UC San Diego who is heading the collaboration.

The grant will pay for the costs of new laboratory facilities at UC San Diego and the University of Chicago, create powerful parallel computing capabilities for the three universities involved and employ 10 or more postdoctoral research fellows. Key UC San Diego researchers participating in the effort are Katja Lindenberg, professor of chemistry and biochemistry; Tim Gentner, associate professor of psychology; Gert Cauwenberghs, professor of bioengineering; Misha Rabinovich, research physicist in the BioCircuits Institute; and Terry Sejnowski, professor of biology.

This is the fourth MURI award led by Abarbanel. The first focused on theory and experiment in complex fluid flows and was funded by the Defense Advanced Research Projects Agency from 1988 to 1993. The second investigated chaotic communications strategies from 1998 to 2003 under sponsorship by the Army Research Office. The third developed advanced chemical sensing methodologies using animal olfactory dynamics and was funded by the Office of Naval Research from 2007 to 2012.


Media Contact
Kim McDonald, 858-534-7572,



(10/10/2011) UC San Diego researchers to receive a $1.9M EFRI grant from NSF during 2011


UC San Diego researchers to receive a $1.9M EFRI grant from NSF during 2011


Media Contacts:
Joshua A. Chamot, NSF (703) 292-7730

Program Contacts:
Sohi Rastegar, NSF (703) 292-8305
Cecile J. Gonzalez, NSF (703) 292-8538
See original article here...


Interdisciplinary teams to explore quietly powerful biological signals and the intersection between minds and machines

September 28, 2011

The National Science Foundation (NSF) Office of Emerging Frontiers in Research and Innovation (EFRI) has announced 14 grants for the 2011 fiscal year, awarding nearly $28 million to 60 investigators at 23 institutions.

During the next four years, teams of researchers will pursue transformative, fundamental research in two emerging areas: technologies that build on understanding of biological signaling and machines that can interact and cooperate with humans.

Results from this research promise to impact human health, the environment, energy, robotics and manufacturing.

Simulating the brain to improve motor control

The project "Distributed Brain Dynamics in Human Motor Control" (1137279) will be led by Gert Cauwenberghs, with colleagues Kenneth Kreutz-Delgado, Scott Makeig, Howard Poizner, and Terrence Sejnowski, all from the University of California at San Diego.

The researchers aim to create an innovative, non-invasive approach for rehabilitation of Parkinson's disease patients. In studies of both healthy individuals and those with the disease, the team will use new wireless sensors and a novel imaging method to monitor and record body and brain activity during real-world tasks. This data will be used to develop detailed, large-scale models of activity in the brain's basal ganglia-cortical networks, where Parkinson's disease takes its toll, with the help of newly developed brain-like hardware. Building on recent advances in control theory, the team will take into account both the perceptual and cognitive factors involved in complex, realistic movements. Ultimately, they will create a system that offers realistic sensory feedback to stimulate beneficial neurological changes.

Summaries of the eight EFRI projects on Engineering New Technologies Based on Multicellular and Inter-kingdom Signaling (MIKS) are found on the award announcement Web page.

Summaries of the six EFRI projects on Mind, Machines, and Motor Control (M3C) are found on the award announcement Web page.




(08/17/2011) UC Research features SCCN's work on Brain-Computer Interface


UC Research features SCCN's work on Brain Computer Interface


Listening in on the brain
By Erik Vance
Wednesday 17 August 2011
See original article here...


UC San Diego scientists are developing technology that links thoughts and commands from the brain to computers. In addition to neat gadgets like mind-dialed cell phones, devices to assist the severely disabled and a cap to alert nodding-off air traffic controllers, new technology could reshape medicine.

Last spring, Tzyy-Ping Jung was all over the news. MIT Tech Review, the Huffington Post and a dozen other outlets and blogs were buzzing about his new headband, capable of reading your thoughts and transferring them to a cell phone.

Imagine, a cell phone you could dial with your mind. One outlet called it "the end of dialing"; another said, "The bar for hands-free technology has officially been raised." Jung, however, just sighs and says they missed the point.

"It's a demonstration of a [brain interface] system that could be applied to daily life. It's not really the end goal," says Jung. "Who needs a phone that dials using brain waves if they can actually dial with their hands?"

Jung is associate director at the Swartz Center for Computational Neuroscience at UC San Diego, where researchers lead a new field called Brain-Computer Interface, or BCI. The emerging area is littered with impressive toys and dazzling gadgets, like robots that move with a thought and artificial arms that respond at will, almost like real ones.



Tzyy-Ping Jung (left) and a group at the National Chiao Tung University in Taiwan have developed headgear and software that monitor brainwaves, collect data and transfer a thought process to a mobile device.

But while high-tech wizardry makes for fun headlines, UC scientists are poised to make a subtler yet fundamental change to the face of medicine. Using a technology somewhat overlooked for more than a decade, scientists are building a two-way conversation between your brain and the many computers that surround it every day.

Scott Makeig works with Jung as the director of the Swartz Center. For more than 20 years he has studied electroencephalogram (EEG) technology. EEGs, recognizable by their funny skullcaps dotted with electrode sensors, measure, at a subject's scalp, the electrical signals emitted by the brain beneath. Though EEG is fast and relatively mobile, over the past decade it has been eclipsed by giant fMRI machines, which use huge magnets to track blood movement within the brain. fMRI is a slower, less direct measure of brain activity, but unlike EEG, which mainly captures the outer layers of the brain, it can pierce all the way through.

"EEG has dwindled to a low point in its use in medicine after MRI came out," Makeig says. "And it was more or less ignored in neurophysiology."

But hold your pity for poor EEG. In the meantime, scientists have been refining the bulky caps to the point where some take up less room than a pair of headphones. Jung has partnered with his alma mater, National Chiao Tung University in Taiwan, to develop headpieces that collect phenomenal amounts of data in a fraction of a second and broadcast it to a laptop or cell phone. Whereas previous EEG caps required gels to be smeared on a user's scalp, today's sleeker "dry" electrodes are so advanced that several companies have even created brain-operated children's toys.

But the skullcap is just half of the brain-sensing equation; you also need to know what all that data means.

"If someone records data from the scalp they immediately realize how messy it is," Jung says. "It's very noisy."

This is the so-called "cocktail party problem" — EEG brain recordings are like noisy gatherings, where dozens of conversations blend with background noises into confusing slurry. Separating which signals are related to a given thought process is daunting.

In the mid-'90s, Makeig and Jung, plus Terry Sejnowski and Anthony Bell at the Salk Institute, pushed through this problem by teasing apart the EEG signals using a clever analysis borrowed from French theoreticians. Before long, they were able to discriminate specific brain area sources within the crowded and overlapping brainwave and EEG signals coming from working brains.
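The unmixing step described here became widely known as independent component analysis (ICA). As a minimal illustration of the idea — not the group's original infomax implementation — scikit-learn's FastICA can recover two synthetic "sources" from channel mixtures; the signals and mixing matrix below are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 1000)

# Two independent source signals standing in for distinct brain processes.
s1 = np.sin(2 * np.pi * 10 * t)          # 10 Hz rhythm
s2 = np.sign(np.sin(2 * np.pi * 3 * t))  # 3 Hz square wave
S = np.c_[s1, s2]

# Each "electrode" records a different blend of the sources —
# the EEG cocktail party in miniature.
A = np.array([[1.0, 0.5],
              [0.4, 1.2]])               # mixing matrix
X = S @ A.T                              # observed channel data

# FastICA recovers the sources up to scale, sign, and ordering.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
```

In real EEG work the "channels" are the scalp electrodes and the recovered components correspond to candidate brain (or artifact) sources, which is what lets researchers discriminate specific brain-area signals in the mixed recordings.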

This, along with a great deal of other work around the world, has opened the way for scientists to now link computers directly to commands from the brain. Although EEGs cannot pierce deep into the brain, the outermost layers — the brain's cortex — generally are where what we call higher reasoning occurs, making it ideal for operating machines.

Naturally, scientists are aiming to build devices to help people with disabilities who are unable to operate wheelchairs, computers and phones. But Makeig says brain interfaces have a much broader potential if used the other way — eavesdropping rather than taking commands. For instance, Makeig and Jung have done research into alertness monitoring for the military. He says soon we may be able to give simple headbands to air traffic controllers to alert them when they are nodding off.


Valuable for patient care

William Mobley, a UC San Diego neurologist who has worked on degenerative neurological disorders and Down syndrome, goes even further. He and Jung head up the Center for Advanced Neurological Engineering, which aspires to create a suit that could relay all kinds of information about a patient.

"We envision a time very soon in which a patient's vital signs, EEG, EKG and movements can be recorded 24/7 and sent wirelessly to a remote location for review by a physician," said Mobley. "The suit might well be deployed to allow neurologists a much more complete assessment of patients with a variety of disorders, in the process collecting many thousands of times as much data as is currently the case."

This is not science fiction. The most sophisticated EEG devices (which cover the head with a bulky cap) can parse out underlying brain signals from the admixture of data recorded from up to 256 places on the scalp. However, with today's gadgets you don't need that kind of precision. With just a dozen channels or so Jung and Makeig can easily detect something as simple as a drowsy air traffic controller.
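A drowsiness check of the kind described above can be surprisingly simple in principle. The sketch below is hypothetical — the sampling rate, frequency band, and threshold are illustrative assumptions, not the lab's actual parameters — but it shows the classic signature: low-frequency theta power rising above a per-user alert baseline.

```python
import numpy as np

FS = 128  # sampling rate in Hz (assumed)

def band_power(eeg, lo, hi, fs=FS):
    """Mean spectral power of a single EEG channel between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def looks_drowsy(eeg, baseline_theta, ratio=2.0):
    """Flag an epoch if theta (4-7 Hz) power exceeds ratio x the alert baseline."""
    return band_power(eeg, 4, 7) > ratio * baseline_theta

# Simulated single-channel data: an alert 4-second epoch (broadband
# noise only) and a drowsy epoch with a strong 5 Hz theta rhythm.
rng = np.random.default_rng(2)
t = np.arange(4 * FS) / FS
alert = 0.5 * rng.standard_normal(t.size)
drowsy = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

baseline = band_power(alert, 4, 7)
print(looks_drowsy(alert, baseline), looks_drowsy(drowsy, baseline))
```

A real monitor would calibrate the baseline per user and combine several channels and frequency bands, but the low channel count the researchers describe is plausible precisely because this kind of spectral signature is robust.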


Tuning in on emotions

With more channels, Makeig also can get a pretty good sense of emotion. He says that a simple EEG device could someday become another tool for psychiatrists to give them a clue into the inner world of their patients. To demonstrate the technology, Makeig and graduate student Tim Mullen last year put on an unusual quartet. Makeig was on the violin and two other researchers took the cello and clarinet while Mullen played, well, his brain. (See photo at the top.) He began before the concert, playing musical notes and carefully cultivating the emotions they inspired in his own mind.

"On the night of the performance, I can sit down and reimagine that state — the state that was evoked by a particular note," Mullen says. "And when I imagine that particular emotion my brain dynamics will be recreated again and the machine detects it and it plays that note that originally evoked that emotion in me."

The resulting call-and-response performance, like the brain dialing, is a stunning demonstration of the underlying potential of EEG-based brain interfaces. Can we expect a first-chair EEG-ist next year at the Metropolitan Opera? No, probably not, but Makeig and Jung say the important lesson is that scientists can now reliably track specific emotions as well as thoughts.

This, the researchers agree, is how BCI will actually integrate into our lives, as it still lags behind fingers for dialing numbers and surfing the Internet. By using the interface to listen in on the mind, scientists can make tools to reshape medicine, along with the clever toys and fodder for the occasional headline.


(04/28/2011) Press Release


ABC News, KPBS and UCSD-TV Feature UCSD Researcher's Brain Monitoring Technology

April 28th, 2011

by Rhonda McCoy


ABC News, KPBS, and UCSD-TV have all recently featured new brain-computer interface (BCI) technology developed by Dr. Tzyy-Ping Jung and associates Yu-Te Wang and Yijun Wang of the Institute for Neural Computation. This technology represents a unique and fast-advancing generation of mobile, wireless brain activity monitoring systems. An immediately promising application, whose feasibility was first demonstrated by UCSD researchers Jung and Makeig in the 1990s, is monitoring the alertness of workers in occupations that demand around-the-clock vigilance, such as air traffic controllers, drivers and pilots, and nuclear power plant monitors.

Jung and collaborators Chin-Teng Lin, Jin-Chern Chiou, and associates at National Chiao-Tung University in Hsinchu, Taiwan have developed a mobile, wireless, and wearable electroencephalographic (EEG) headband system whose dry scalp sensors monitor the wearer's brain waves, transmitting the signals through a Bluetooth link that can be read by many cell phones and other mobile devices. The system can continuously monitor the wearer's level of alertness and cue appropriate feedback (for example, audible warning signals or other system alerts) to assist a drowsy worker in maintaining system performance.

Jung, a biomedical engineer, research scientist and Associate Director of the Swartz Center for Computational Neuroscience (SCCN) in the Institute for Neural Computation, UCSD, says that the technology is almost production ready.  "We're trying to translate the technology from laboratory experiments to the real world, step by step."

The same dry electrode technology has also been used to detect brain activity in response to visual stimuli flickering at specific frequencies,  enabling hands-free dialing of a cell phone. Using such a system, a severely handicapped person could summon emergency aid simply by focusing on the numbers of a keypad.  This and similar "smart" prosthetics that respond to direct brain-signal commands may soon offer many new opportunities to disabled persons.

As former UCSD Vice Chancellor for Research Art Ellis stated, "Universities are finding that interdisciplinary research and international teamwork significantly increase our ability to translate the discoveries in our laboratories into results that benefit society."

The Swartz Center for Computational Neuroscience, directed by Scott Makeig, was founded in 2001 by a generous gift from founding donor Dr. Jerome Swartz of The Swartz Foundation (Old Field, New York). The center is currently also funded by grants from the Office of Naval Research, the Army Research Laboratory, the Army Research Office, DARPA, and the National Institutes of Health. Dr. Jung's research is also supported in part by a gift from Abraxis Bioscience Inc.


Media Contact:  Paul K. Mueller, 858.534.8564

(04/14/2011) abcNews features SCCN's work

See article by KI MAE HEUSSNER here...


Air Traffic Controllers: Brain Monitoring to Keep Them Awake?

Brain-Computer Interfaces Can Dial Phones With Thoughts, Detect Fatigue as Well

A new class of brain-computer interface technology could not only let you control devices and play games with your thoughts, but also help detect fatigue in air traffic controllers and other workers in high-stakes positions.

Researchers at the Swartz Center for Computational Neuroscience at the University of California, San Diego, have made it possible to place a cellphone call by just thinking about the number. They say the technology could also tell whether a person is actively thinking, or nodding off.

Tzyy-Ping Jung, a neuroscience researcher and associate director of the center, said the system uses brainwave sensors (electroencephalogram, or EEG, electrodes) attached to a headband to measure a person's brain activity. The brain signals are then transferred to a cellphone through a Bluetooth device connected to the headband.


Applications Could Provide Hands-Free Dialing, Help for People with Disabilities

In the lab, he said, test subjects sit in front of a screen displaying 10 digits, each flashing at a different rate. The number 1, for example, may flash nine times per second, while the number 2 flashes at a slightly higher frequency.

As participants view each number, the corresponding frequency is reflected in the visual cortex in their brains, he said. That activity is picked up by the sensors, relayed through the wireless Bluetooth device and then used to dial numbers on the cell phone.

Assuming all goes according to plan, if you place the headband on your head, sit at the screen, and then view the digits 1-2-0-2-4-5-6-1-4-1-4, your thoughts alone should lead you to the White House switchboard.
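The decoding step behind this demo can be sketched in a few lines. This is an illustration only — the sampling rate, the digit-to-frequency mapping, and the simple FFT peak-picking here are assumptions for demonstration, not the lab's actual parameters or algorithm — but it captures the principle: the attended digit's flicker frequency dominates the EEG spectrum over the visual cortex.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)
# Hypothetical layout: digit d flickers at 9 + 0.5*d Hz.
DIGIT_FREQS = {d: 9.0 + 0.5 * d for d in range(10)}

def decode_digit(eeg, fs=FS):
    """Return the digit whose flicker frequency dominates the spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return max(DIGIT_FREQS, key=lambda d:
               spectrum[np.argmin(np.abs(freqs - DIGIT_FREQS[d]))])

# Simulate two seconds of EEG while the user attends digit 7
# (12.5 Hz under this mapping), buried in broadband noise.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * DIGIT_FREQS[7] * t) + 0.8 * rng.standard_normal(t.size)

print(decode_digit(eeg))  # expected: 7
```

Dialing a full number is then just repeating this decode once per digit, which is why per-digit accuracy matters so much for the overall dialing success rate.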

Jung said that results vary from person to person, but many people can reach 90 or even 100 percent accuracy.

"Probably I was the worst subject. I think I reached 85 percent," he said.

For now, the technology is just in the developmental phase. But Jung, who has been studying neurological engineering since 1993, said, "We're trying to move from the lab to the real world, step by step."

In time, applications could potentially give consumers a hands-free way to use their cell phones or people with disabilities a new way to interact with the world. But, Jung said, more passive uses of the technology could already be used to detect fatigue or lapses in attention in people who work in fields where concentration is essential.


Brain-Computer Tech Could Alert Workers When Attention Drops

"In the past, all these brain-computer interfaces have targeted a very small fraction of the patient population," he said. "But [people in] the general, healthy population actually suffer, from time to time, from mental fatigue. …Attention deficit can lead to catastrophic consequences."

Those consequences have been especially visible in recent months, as air traffic controllers have been found sleeping on the job at airports across the country. This week, an FAA official resigned the day after reports of yet another drowsy air traffic controller.

Jung said the same brainwave sensors that enable thought-controlled dialing could be used for cognitive monitoring.

Air traffic controllers, truck drivers, members of the military and anyone else whose lapse in concentration could put lives at risk could strap on a headband (or helmet) and be alerted when their brain activity indicates a drop in attention or alertness. They might hear a warning signal, or get a tactile alert, Jung said.


Technology More Ready Than Consumers

But he said that while the technology is almost ready, people might not be ready to accept it.

"One of the difficulties is people don't want to be watched," he said. "It's sort of like Big Brother watching you all the time."

He also said that he and his team are continuing to refine their technology to tease apart various internal and external factors, like a person's medication or outside power lines, that can generate electronic "noise" and make it more difficult to discern important signals.

Still, given the positive implications, he said, major organizations are interested in the research. His university has contracts with the Army, Navy and DARPA to study how brain-computer interfaces could help soldiers, he said.

And Jung and his team are not the only ones interested in blending the worlds of computing and neuroscience.

NeuroSky, a San Jose, Calif.-based company, already sells a wireless EEG headset that it says can be used for education and gaming.

The MindWave headset measures brainwave impulses from a person's forehead and can be used to gauge student attention levels during lessons, monitor daily meditation and play games that depend on a user's emotional control.

Tansy Brook, the head of communications for the company, said applications for people who work in hazardous work environments, such as air traffic controllers or construction workers, could be realized in the next five years.

"There's a general awareness you want people to have in those situations, they need to be paying attention every single second," she said. "There is amazing potential."

(04/14/2011) KPBS radio documents Tzyy-Ping Jung's (SCCN) research on brain-computer interface technology.

See article by PEGGY PICO here...


KPBS: New wireless technology uses brain waves to dial up a friend on a cell phone.

Above: A student tests a new brain-wave cell phone app.
Credit: UCSD Photo
Listen to the audio of the interview...

In very simple terms, it works like this:

First, the user puts on a wireless headband or hat embedded with electrodes that read brain activity.

Next, the caller looks at a series of numbers that flicker at different rates on a computer screen. When focused upon, each number causes a slightly different brain wave pattern.

The cell phone decodes the brain waves associated with those numbers and places the call.

Neuroscience researcher Tzyy-Ping Jung, Ph.D. and his colleagues at the Swartz Center for Computational Neuroscience at the University of California, San Diego developed the system.

"It can bypass conventional motor output path and provide a direct path ofcommunication from the brain to an external device," said Jung.

The cell-phone program is a type of brain-computer interface (BCI) system, part of a rapidly expanding scientific field in which researchers are finding ways to use thought patterns to command computers and mechanical devices such as artificial limbs.

The cell-phone technology could be beneficial to quadriplegics, or those with other severe physical disabilities.

Jung said that because the cell-phone-based BCI uses dry electrodes, miniature electronic circuits and wireless telemetry, it is easier and more comfortable to use than most BCI systems.

“In less than a minute you’re connected and you can do a lot, like experiments, or you can control things, or do video games with just your brain activity,” explained Jung.

In various trial groups, the cell-phone users were about 95 percent accurate in dialing a 10-digit phone number.

Jung said the cell-phone application could be on the market within the next few years.


Additional coverage:

1. Technology Review

2. Asian American

3. Huffington Post

4. ABCNews

(02/20/2011) INC is featured in the latest UCSD at 50 TV show

The UCSD TV series on UCSD's 50th Anniversary year put together an episode on brain-computer interface research at SCCN and INC.

View the video here...

(02/08/2011) INC Co-Director Terrence Sejnowski elected to National Academy of Engineering

INC Co-Director and Salk Institute professor Terrence J. Sejnowski, Ph.D., has been elected to the National Academy of Engineering. This places him in a remarkably elite group of only ten living scientists elected to the National Academy of Sciences, the Institute of Medicine, and the National Academy of Engineering. UCSD and INC congratulate Dr. Sejnowski on this prestigious appointment and exceptional achievement.

(01/11/2011) INC research to be featured on the cover of NeuroImage

Lost in thoughts: Neural markers of low alertness during mind wandering.

The February issue of NeuroImage: A Journal of Brain Function will feature an article by INC researcher Arnaud Delorme and his student Claire Braboszcz.


(01/04/2011) IEEE names Cauwenberghs Editor-in-Chief and Conference Chair

Gert Cauwenberghs takes on leadership roles in biomedical circuits and systems for IEEE

Gert Cauwenberghs, Co-Director of INC, takes on multiple leadership roles for IEEE in 2011. Gert is the newly named Editor-in-Chief of IEEE Transactions on Biomedical Circuits and Systems. In addition to this role in the field's primary publications, he will chair two upcoming conferences, serving as General Chair of the IEEE Biomedical Circuits and Systems Conference in San Diego in 2011 and as Technical Chair of the IEEE Engineering in Medicine and Biology Conference in 2012.

(01/04/2011) Cauwenberghs and Lee : IEEE Fellows in 2011

Gert Cauwenberghs and Te-Won Lee selected as IEEE Fellows for 2011

INC Co-Director Gert Cauwenberghs and affiliated researcher Te-Won Lee have been selected by the Institute of Electrical and Electronics Engineers (IEEE) as fellows for 2011. Te-Won has perfected independent component analysis algorithms and systems for auditory scene analysis and acoustic source separation, for hands-free telecommunication in cars and mobile environments.

Gert's development of biosensors with student Mike Chi is being recognized by IEEE and has also been featured in MIT Technology Review. The biosensor allows long-term monitoring with greater ease of use and increased comfort. The low-cost sensor can be mass-produced and used outside the hospital environment, greatly expanding the potential to detect conditions that may not manifest during the time period of normal hospital observation. The capacitive sensor is particularly distinctive for making the use of existing technology cost-effective through widely available components and novel circuitry.

Read the MIT Technology Review

(12/09/2010) NSF awards grant to Javier Movellan

NSF funds "An International Social Network for Early Childhood Education"

INC researcher Javier Movellan has been awarded a grant of $749,998 by the National Science Foundation to support development of RubiNet, a social network for early childhood education. The project will develop resources for early childhood education at national and international levels, bringing children, teachers, parents, and researchers together. A unique feature of the project is the use of low-cost, sociable robots as network interfaces. In addition to supporting education and data gathering, the robots will allow children to exchange objects across international boundaries using the robots as intermediaries. This significant difference from other computer interfaces will also allow children in the United States to look around a classroom in Japan, find their friends, and initiate a hug using the robot's child-safe arms.

(12/07/2010) PNAS Interviews Terrence Sejnowski

Dr. Sejnowski discusses TDLC with PNAS.

Read the interview here.

(11/30/2010) INC's CANE group project approved by the NSC

i-RICE brings Taiwanese scholars to INC

In collaboration with the Institute of Engineering in Medicine (IEM), INC will host students and postdocs to work with INC and IEM researchers. Tzyy-Ping Jung, of the Center for Advanced Neural Engineering (CANE), participated in developing the proposal recently approved by Taiwan's National Science Council. The students and researchers visiting from Taiwan will participate in an International Center in Advanced Bioengineering Research.

(10/26/2010) Wired Gadget Lab interviews MPLab's Javier Movellan

Gallery: Let Your Children Play With Robots
By Tim Carmody October 26, 2010 | Categories: R&D and Inventions

See article here...

See local copy here...

Machine Perception Laboratory


(08/16/2010) Former INC postdoctoral student Inna Fishman talks to Psychology Today.

Salk neuropsychologist Inna Fishman explains some of her current work to Psychology Today.

The Brain's Language Processing in Williams Syndrome and Autism

See article here:


(08/05/2010) Howard Poizner and Team awarded $4.5M ONR MURI grant

Howard Poizner (PI, UCSD), and co-PI's Gary Lynch (UC Irvine) and Terry Sejnowski (Salk and UCSD), together with team leaders Hal Pashler, Sergei Gepshtein, Deborah Harrington, Tom Liu, Eric Halgren, and Ralph Greenspan were recently awarded a $4.5M ONR MURI grant, with a $3M option period, to study the brain bases of unsupervised learning and training. (October 1, 2009)


The study, “How Unsupervised Learning Impacts Training: From Brain to Behavior”, involves the following:


Principal Investigator: Howard Poizner

Co-PI’s: Gary Lynch (UC Irvine) and Terry Sejnowski (Salk and UCSD)

Agency: ONR (Office of Naval Research)

Funding: $4.5M (3yr base period) [started Oct 1, 2009]; $3.0M (2yr option period); $7.5M (5 year total period)


The goal of this multidisciplinary grant is to examine the neurobiological, genetic, brain dynamic, and neural circuit correlates of unsupervised learning and training. The proposed studies utilize the new capabilities for creating 3D immersive environments and simultaneous EEG-fMRI recordings recently established through ONR-DURIP grant # N000140811114 (H. Poizner, PI).


Additional Information
The cerebral cortex is able to create rich representations of the world that are much more than just reinforcement learning and reflexes. Learning is often self-supervised without feedback, a type of learning referred to as unsupervised learning. Such learning, and memory, is (i) commonplace in naturalistic settings, (ii) critical to humans, (iii) encoded by LTP-type mechanisms, and (iv) of direct relevance to computational theories of learning. Using unsupervised learning, an individual builds up internal hierarchical structures and categorizations that model the statistical properties of the environment. These internal representations can be used flexibly and powerfully to acquire new information thereby creating situational awareness and readiness to act in novel as well as in familiar environments. Yet, unsupervised learning and its neurobiological mechanisms are poorly understood. Our proposed projects will provide new understanding of the neurobiological, genetic, brain dynamic, and neural circuit correlates of this potentially powerful form of learning and training. We propose seven tasks that attack different aspects of the problem making use of parallel paradigms in rodents, flies, and humans. Task 1 maps memory during spatial learning in rats, seeking to uncover the neural engram of memory. Task 2 uses computational modeling to illuminate cortical processes of unsupervised learning in humans. Task 3 conducts studies of training, contrasting the rate and efficiency of both unsupervised and supervised learning. Task 4 explores the brain dynamics of unsupervised learning, using motion capture and virtual environments while recording cortical EEG. Tasks 5 and 6 investigate neuroimaging and genetic correlates of unsupervised learning bringing to bear the new methodology of simultaneous EEG-fMRI recording and using intracranial recordings. 
Finally, Task 7 exploits the genetic, cellular, and behavioral homologies of the fruit fly with humans to study the dopaminergic and genetic regulation of inter-regional coherence associated with learning.

These studies should provide insight into design of the best training environments for our modern military, and increase our understanding of the underlying neurobiological, genetic, brain dynamic, and neural circuit correlates of those environments. Moreover, the studies will open the way to asking if memory enhancing drugs such as ampakines or if particular learning regimens (e.g., extensive experience with diverse environments, short vs. long sessions) change the number and/or distribution of learning-related synaptic modifications and/or the nature of the neural networks and brain dynamics that underlie unsupervised learning. This issue is fundamental to development of mechanism-based strategies for improving learning and performance in complex environments. Finally, the genetic studies will pave the way for development of individualized training techniques that optimize learning environments.


The Institute for Neural Computation (INC) and Institute of Engineering in Medicine (IEM) have jointly launched the UC San Diego Center for Advanced Neurological Engineering (CANE), under Co-Directors William Mobley, Professor and Chair of Neuroscience and Tzyy-Ping Jung, INC Senior Research Scientist and Co-Director of the Swartz Center for Computational Neuroscience (SCCN). The mission: To successfully apply the area’s abundant neural engineering and computation expertise toward translation of basic knowledge and capabilities into solid improvements in the diagnosis, treatment, and prevention of neurological diseases and pathology.

Powering CANE will be the synergism unleashed by bringing together scientists, engineers, and clinicians in the UCSD Health Sciences, Jacobs School of Engineering, Division of Biological Sciences, and other Units in UCSD, as well as the Salk Institute, other neighboring research institutes, and industrial partners. These scientists already have a strong track record of interdisciplinary collaboration in neuroscience, engineering, computation, and clinical translation, and CANE will encourage further research and development collaborations.

The Center will develop and utilize a wide spectrum of innovative methods in brain and body imaging and will apply powerful mathematical and data mining approaches to the resultant information -- a combination to pave the way for translating advances in neuroscience into enhancements in health care environments, whether clinical, workplace or home-based. 

Of importance as well will be CANE’s training of next-generation scientists, engineers, and physicians. Early-stage researchers will get assistance in entry into the research environment and affiliated laboratories will get help in recruiting researchers.

More CANE info ...


(07/10/2010) SCCN will play a leading role in a five-year, $25-million ARL project

A research team of neuroscientists, cognitive scientists and engineers at the University of California, San Diego will play a leading role in a five-year, $25-million Army Research Laboratory (ARL) project to better understand human-systems interactions.



See article here:


(06/29/2010) MPLab's RUBI on NY Times

"Students, Meet Your New Teacher, Mr. Robot"

See article here:


(06/10/2010) Discovery Channel films Howard Poizner and Gary Lynch.

‘Curiosity: The Questions of Life,’

The Discovery Channel filmed Howard Poizner and Gary Lynch in Dr. Poizner’s lab for a series called ‘Curiosity: The Questions of Life,’ on research of the future. The segment is on memory and will air in 2011. Shown on the left is series producer Kyle McCabe, and on the right, series host Dr. Dan Riskin, who is wearing Dr. Poizner’s motion capture body suit, EEG cap and virtual reality head-mounted display. Dr. Riskin explored a large-scale virtual environment while his body movements and cortical EEG were simultaneously recorded and his memory monitored.



(05/25/2010) UC San Diego Entrepreneurship Challenge

Yu Mike Chi, graduate student in the Cauwenberghs laboratory in the Department of Bioengineering and the Institute for Neural Computation, led a team of eight students in the UC San Diego Jacobs School of Engineering, UC San Diego Rady School of Business, and the Salk Institute, to win the top prize in the UC San Diego $80K Entrepreneurship Challenge.

See article here:


(05/01/2010) 2010 Cognitive Neuroscience Annual Spring Retreat

This retreat is sponsored by the NIH Cognitive Neuroscience Training Program of the Institute for Neural Computation and follows in the tradition of the retreats sponsored by the McDonnell-Pew Center for Cognitive Neuroscience.

See details here:


(04/27/2010) INC Director, Terrence Sejnowski elected to National Academy of Sciences

La Jolla, CA - Salk Institute professor Terrence J. Sejnowski, Ph.D., whose work on neural networks helped spark the neural networks revolution in computing in the 1980s, has been elected a member of the National Academy of Sciences. The Academy made the announcement today during its 147th annual meeting in Washington, DC. Election to the Academy recognizes distinguished and continuing achievements in original research, and is considered one of the highest honors accorded a U.S. scientist.

See details here:


(03/20/2010) INC Fellowship Opportunities

Promoting multi-level approaches to the neural bases of cognition.

See details here:


(03/04/2010) INC/SCCN Open House

(09/01/2009) INC Newsletter Spring 2009

A brief publication listing news and current INC events.

Link to pdf file here: