MIT Debuts New Brain-Based Chip

A recent article on the BBC (and the highly recommended MIT News) breaks the news on an innovative silicon chip that models neuronal architecture and neuronal communication.  The chip’s 400 transistors mimic the cell body of a neuron: they summate the analog signals received from other chips.  When these signals reach an adjustable limit, they cascade into an action potential, just as in neurons.  Depending on their arrangement and organization, these action potentials can have an excitatory or inhibitory effect on their neighbours, analogous to their biological counterparts.
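
What the chip does in analog hardware is, in essence, the classic threshold (integrate-and-fire) behaviour. Here is a minimal software sketch of that idea; the threshold, weights, and input values are invented for illustration and are not taken from the MIT design:

```python
# A minimal integrate-and-fire sketch of the behaviour described above.
# All names and values are illustrative, not from the MIT chip.

class ThresholdNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold  # the "adjustable limit"
        self.potential = 0.0        # summed analog input so far

    def receive(self, signal, weight=1.0):
        """Summate an incoming signal; a positive weight is excitatory,
        a negative weight inhibitory."""
        self.potential += weight * signal

    def step(self):
        """Fire an 'action potential' if the summed input reaches threshold."""
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # spike
        return False

neuron = ThresholdNeuron(threshold=1.0)
for s in [0.3, 0.4, 0.5]:   # three excitatory inputs
    neuron.receive(s)
print(neuron.step())        # True: the inputs summed past the threshold
```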

This kind of modelling is exciting and interesting, for it is profoundly different from other contemporary methods of modelling brain activity.  At the risk of overgeneralizing, most other programmes model the brain’s circuitry – the neurons, the synaptic connections, the action potentials – in a virtual space.  They exist as computer code, or as interacting objects created by such code.  These coded objects, whatever existence they have, model the function of neurons.  These chips, by comparison, are an actual, physical model of a neuron.  And this is the important difference between the two paradigms: the difference between modelling function and modelling form.

Continue reading “MIT Debuts New Brain-Based Chip”

Astronomers find ET habitability, but only for the biological.

The Search for Extraterrestrial Intelligence, or SETI, has made a business of looking for signs of intelligence in the universe.  Recent data from a team of astronomers at UC Santa Cruz and the Carnegie Institution for Science have given SETI a promising place to focus their attention: Gliese 581 g, a planet 20 light-years away in the ‘habitable zone’ around the red dwarf star Gliese 581.  Many factors determine whether a planet is habitable, ranging from the obvious variables, such as its distance from the star and the star’s luminosity, to the less obvious ones, such as whether the planet has a large enough moon to keep its axial tilt stable, or a giant neighbour (such as Jupiter) to sweep away dangerous incoming asteroids.
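
As a rough illustration of how the two obvious variables combine: to first approximation, the habitable zone sits at a distance that scales with the square root of the star’s luminosity, which is why a dim red dwarf like Gliese 581 has a zone far closer in than the Sun’s. The boundary constants and the red-dwarf luminosity below are illustrative assumptions, not figures from the study:

```python
from math import sqrt

def habitable_zone_au(luminosity_solar, inner=0.95, outer=1.37):
    """First-order habitable-zone estimate: distances (in AU) scale with
    sqrt(L / L_sun), anchored to rough inner/outer bounds for the Sun."""
    scale = sqrt(luminosity_solar)
    return inner * scale, outer * scale

# Illustrative value: red dwarfs like Gliese 581 emit on the order of
# 1% of the Sun's luminosity (the exact figure is an assumption here).
print(habitable_zone_au(1.0))    # the Sun: roughly (0.95, 1.37) AU
print(habitable_zone_au(0.013))  # a dim red dwarf: roughly (0.11, 0.16) AU
```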

This discovery, made using radial-velocity measurements from the Keck Observatory, suggests that Gliese 581 g may have the right conditions for liquid water, considered by many exobiologists (or astrobiologists: those who theorize about extraterrestrial life) to be essential for life.

However, some have argued that since it’s not life but intelligence that we’re really after, the habitable zone may be the wrong place to look. Continue reading “Astronomers find ET habitability, but only for the biological.”

Caring Robots

Back in 1966 Joseph Weizenbaum created “ELIZA”, a relatively simple computer program which was meant to simulate a psychotherapist. The program worked largely by rephrasing a patient’s statements as questions, which were then posed back to the patient. Many subjects reported preferring ELIZA to their human therapists, and some continued to value ELIZA’s therapy even after Weizenbaum revealed ELIZA’s workings. (You can read a transcript of ELIZA in action here.)
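
ELIZA’s method was mostly shallow pattern matching plus pronoun reflection, which is easy to sketch. The rules below are invented for illustration; Weizenbaum’s actual DOCTOR script was considerably richer:

```python
import re

# A toy ELIZA-style rephraser. The rules and word swaps are invented
# for illustration; the real DOCTOR script was far richer.
RULES = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)",   "How long have you been {0}?"),
    (r"My (.*)",     "Tell me more about your {0}."),
]
SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(w.lower(), w) for w in text.split())

def respond(statement):
    for pattern, template in RULES:
        m = re.match(pattern, statement, re.IGNORECASE)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default deflection

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```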

Things have moved on somewhat since ELIZA’s day. Maja Matarić, a professor of computer science at the University of Southern California, has developed robots that can provide advice and therapy to patients who have suffered strokes or who suffer from Alzheimer’s. Using a combination of laser scanners and cameras, the robot can monitor a patient’s movements as they perform a regime of physical therapy, and provide encouragement and advice. But even more impressively, the robot can gauge how introverted or extroverted the patient is, and tailor the tone of its advice accordingly. One stroke patient reported much preferring the robot’s advice and encouragement to that of her husband . . .
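
The article gives no implementation details, but the tailoring step is easy to caricature. A hypothetical sketch, in which an estimated extroversion score selects between two coaching styles; the threshold, scores, and phrases are all invented, and nothing here comes from Matarić’s actual system:

```python
# Hypothetical sketch of personality-tailored encouragement.
# The threshold, scores, and phrases are invented for illustration.

def encouragement(extroversion, exercises_done, exercises_target):
    remaining = exercises_target - exercises_done
    if extroversion > 0.5:
        # Extroverts tend to respond to energetic, challenging coaching.
        return f"Great job! Only {remaining} to go -- push through!"
    # Introverts tend to respond to gentler, lower-pressure nudging.
    return f"You're making steady progress. {remaining} left, at your own pace."

print(encouragement(extroversion=0.8, exercises_done=7, exercises_target=10))
print(encouragement(extroversion=0.2, exercises_done=7, exercises_target=10))
```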

Continue reading “Caring Robots”

Time to say ‘sorry’?

It is unusual for a philosopher to be the subject of headline news. However, in recent days the media has widely covered a high-profile campaign seeking an apology to Alan Turing from the British Government.

Of course, Turing was not just a philosopher: in academic terms, he was primarily a mathematician and computer scientist, although he is perhaps most widely known outside academia for his work at the code-breaking centre Bletchley Park during World War II, where he was a major contributor to breaking the Nazi ‘Enigma’ codes. Nevertheless, his contribution to the study of artificial intelligence provoked much debate in philosophy by way of his Turing Test. Continue reading “Time to say ‘sorry’?”

It’s not easy being evil

Scientific American covers cognitive scientist Selmer Bringsjord’s efforts to program a thoroughly evil artificial intelligence.  As presented in the article, Bringsjord’s working definition of evil seems pretty confused.

To be truly evil, someone must have sought to do harm by planning to commit some morally wrong action with no prompting from others (whether this person successfully executes his or her plan is beside the point). The evil person must have tried to carry out this plan with the hope of “causing considerable harm to others,” Bringsjord says. Finally, “and most importantly,” he adds, if this evil person were willing to analyze his or her reasons for wanting to commit this morally wrong action, these reasons would either prove to be incoherent, or they would reveal that the evil person knew he or she was doing something wrong and regarded the harm caused as a good thing.

Parts of that paragraph read as describing a sadist, a psychopath, or someone who is badly confused. None of these things seem like a good stand-in for evil. But, then, evil is a notoriously difficult idea to define.
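
One way to see how many distinct jobs that definition is doing at once is to spell it out as a toy predicate. This is just a paraphrase of the quoted conditions in code form, not Bringsjord’s actual formalism:

```python
# A toy paraphrase of the quoted definition, not Bringsjord's formalism.

def is_evil(agent):
    planned_harm = agent["planned_wrong_act"] and not agent["prompted_by_others"]
    hoped_for_harm = agent["hoped_to_cause_considerable_harm"]
    # The "most important" clause: on reflection, the agent's reasons are
    # either incoherent, or reveal harm being regarded as a good thing.
    bad_reasons = agent["reasons_incoherent"] or (
        agent["knew_act_was_wrong"] and agent["regarded_harm_as_good"]
    )
    return planned_harm and hoped_for_harm and bad_reasons

villain = {
    "planned_wrong_act": True, "prompted_by_others": False,
    "hoped_to_cause_considerable_harm": True,
    "reasons_incoherent": False,
    "knew_act_was_wrong": True, "regarded_harm_as_good": True,
}
print(is_evil(villain))  # True
```

Even as a toy, the last clause stands out: it lumps together incoherence and delighting in harm, which are very different failings.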

I wonder if this general approach – skip the rigorous definition, instead try to recreate the behavior – might appeal to experimental philosophers. Is there anything to be gained from trying to model confusing psychological phenomena like weakness of the will or self-deception? If we could program a computer to behave as if it were deceiving itself, could that possibly give us any insight into what’s going on when we deceive ourselves?

Related articles:

Neuroethics: Ethics and the Sciences of the Mind
By Neil Levy, University of Melbourne (December 2008)
Philosophy Compass