Where are all our memories stored?

Your brain does not process information, retrieve knowledge, or store memories. In short, your brain is not a computer. The American psychologist Robert Epstein explains why thinking of the brain as a machine helps neither the progress of science nor our understanding of human nature.

Despite their best efforts, neuroscientists and cognitive psychologists will never find a copy of Beethoven's Fifth Symphony in the brain, nor copies of words, pictures, grammar rules or any other external stimuli. The human brain is not literally empty, of course. But it does not contain most of the things people think it contains, not even things as simple as "memories."

Our misconceptions about the brain have deep historical roots, but the invention of the computer in the 1940s has confused us more than anything else. For half a century, psychologists, linguists, neurophysiologists and other experts on human behavior have been asserting that the human brain works like a computer.

To get a sense of how frivolous this idea is, consider the brains of babies. A healthy newborn is equipped with more than ten reflexes. It turns its head toward whatever strokes its cheek and sucks on whatever enters its mouth. It holds its breath when submerged in water. It grips things so strongly that it can nearly support its own weight. But perhaps most importantly, newborns come with powerful learning mechanisms that allow them to change rapidly so that they can interact ever more effectively with the world around them.

Senses, reflexes and learning mechanisms are what we start with, and, when you think about it, that is quite a lot. If we lacked any of these capabilities, we would probably find it hard to survive.

But here is what we are not born with: information, data, rules, knowledge, vocabulary, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols and buffers, the elements that allow digital computers to behave somewhat intelligently. Not only are we not born with these things, we do not develop them during our lifetime either.

We do not store words, or the rules that tell us how to use them. We do not create representations of visual stimuli, store them in a short-term memory buffer, and then transfer them to a long-term memory device. We do not retrieve information, images or words from a memory register. Computers do all of these things; living organisms do not.

Computers literally process information - numbers, words, formulas, images. First, the information must be translated into a format that a computer can recognize, that is, into sets of ones and zeros ("bits"), assembled into small blocks ("bytes").
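
To make this concrete, here is a minimal sketch in Python (my illustration, not anything from the article) of what that translation involves: a short string is encoded into bytes, and each byte is written out as eight bits.

    # Minimal sketch (not from the article): turning text into bytes and bits.
    text = "Fifth Symphony"

    data = text.encode("utf-8")                        # the string as a sequence of bytes
    bits = " ".join(f"{byte:08b}" for byte in data)    # each byte written as eight bits

    print(list(data)[:5])   # [70, 105, 102, 116, 104]
    print(bits[:44])        # 01000110 01101001 01100110 01110100 01101000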

Computers move these sets from place to place in different areas of physical memory, implemented as electronic components. Sometimes they copy the sets, and sometimes they transform them in various ways - say, when you correct mistakes in a manuscript or retouch a photograph. The rules that a computer follows when moving, copying or working with an array of information are also stored inside the computer. The set of rules is called a "program" or "algorithm". A collection of algorithms working together that we use for different purposes (for example, to buy stocks or online dating) is called an "application."
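
As a further hedged sketch (again Python, and again my example; the helper fix_typo is hypothetical), here is a tiny "program" in exactly this sense: an explicit rule the machine follows when it copies a stored block of data and transforms it, in this case correcting a mistake in a manuscript.

    # Hedged sketch (not from the article): a rule-governed transformation of stored data.
    manuscript = bytearray(b"Beethoven's Fith Symphony")   # data held in physical memory

    def fix_typo(buffer: bytearray, wrong: bytes, right: bytes) -> bytearray:
        """Copy the buffer, then replace every occurrence of `wrong` with `right`."""
        copy = bytearray(buffer)           # an exact, bit-for-bit copy of the stored data
        return copy.replace(wrong, right)  # the transformation rule itself

    corrected = fix_typo(manuscript, b"Fith", b"Fifth")
    print(corrected.decode())              # Beethoven's Fifth Symphony

Every step here is explicit and inspectable, which is precisely the property the article goes on to argue brains do not share.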

These are familiar facts, but they are worth stating explicitly to make the point clear: computers operate on symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memory. They really are guided, in everything they do, without exception, by algorithms.

Humans, on the other hand, do nothing of the kind. So why do so many scientists talk about our mental life as if we were computers?

In 2015, the artificial intelligence expert George Zarkadakis published In Our Own Image, in which he describes six different metaphors that people have used over the past two thousand years to explain how human intelligence works.

In the earliest, the biblical one, humans were formed from clay or mud, which an intelligent god then infused with its spirit. That spirit also "explained" our mind, at least from a grammatical point of view.

The invention of hydraulic engineering in the 3rd century BC made a hydraulic model of human consciousness popular. The idea was that the flow of different fluids in the body, the "humours," accounted for both physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, hampering the development of medicine all the while.

By the 16th century, devices powered by springs and gears had appeared, inspiring René Descartes to argue that a human being is a complex machine. In the 17th century, the British philosopher Thomas Hobbes suggested that thinking arises from small mechanical movements in the brain. By the early 18th century, discoveries in electricity and chemistry had led to new theories of human thinking, once again largely metaphorical in nature. In the mid-19th century, the German physicist Hermann von Helmholtz, inspired by the latest advances in communications, compared the brain to a telegraph.

Each metaphor reflects the most advanced thinking of the era that produced it. Predictably, just a few years after the birth of computer technology in the 1940s, the brain was said to work like a computer, with the brain itself in the role of hardware and our thoughts in the role of software.

This view was cemented in the 1958 book The Computer and the Brain, in which the mathematician John von Neumann stated emphatically that the function of the human nervous system is "digital in the absence of evidence to the contrary." Although he acknowledged that very little was known about the role the brain plays in reasoning and memory, he drew parallels between the components of the computing machines of his day and parts of the human brain.

With subsequent advances in both computer technology and brain research, an ambitious interdisciplinary effort to understand human consciousness gradually developed, grounded in the idea that humans, like computers, are information processors. This work now spans thousands of studies, attracts billions of dollars in funding, and has produced a vast literature. Ray Kurzweil's 2012 book How to Create a Mind: The Secret of Human Thought Revealed exemplifies the view, describing the brain's "algorithms," its methods of "processing information," and even the way its structure supposedly resembles an integrated circuit.

The conception of human thinking as information processing (the "IP" metaphor) currently dominates, among ordinary people and scientists alike. But in the end it is just another metaphor, a fiction we pass off as reality to explain something we do not actually understand.

The faulty logic of the IP metaphor is fairly easy to state. It rests on a flawed syllogism with two reasonable premises and a false conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities capable of behaving intelligently are information processors.
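
Written out in first-order notation (my formalization, not the author's), with C(x) for "x is a computer," I(x) for "x behaves intelligently" and P(x) for "x is an information processor," the flaw is easy to see:

    \forall x\,\bigl(C(x) \rightarrow I(x)\bigr)  \quad\text{(premise 1)}
    \forall x\,\bigl(C(x) \rightarrow P(x)\bigr)  \quad\text{(premise 2)}
    \therefore\; \forall x\,\bigl(I(x) \rightarrow P(x)\bigr)  \quad\text{(does not follow)}

Both premises quantify only over computers, so they say nothing about intelligent entities that are not computers: anything for which I(x) holds while C(x) and P(x) are false satisfies both premises and refutes the conclusion.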

Setting the formalities aside, the idea that humans must be information processors just because computers are information processors is nonsense, and when the IP metaphor is finally abandoned, historians will almost certainly regard it the way we now regard the hydraulic and mechanical metaphors.

Try an experiment: draw a hundred-ruble bill from memory, and then take it out of your wallet and copy it. Do you see the difference?

The drawing made without the original in front of you will inevitably look terrible compared with the one copied from life, even though you have seen this bill many thousands of times.

What is the problem? Shouldn't the "image" of the banknote be "stored" in a "memory register" in our brain? Why can't we simply "retrieve" that "image" and reproduce it on paper?

Obviously not, and even thousands of years of research will never locate an image of that bill in the human brain, for the simple reason that it is not there.

The idea, promoted by some scientists, that individual memories are somehow stored in dedicated neurons is absurd. Among other things, it pushes the question of memory's structure to an even more intractable level: how and where, then, is a memory stored inside a cell?

One prediction that the futurist Ray Kurzweil, the physicist Stephen Hawking and many others have made in one form or another is that if human consciousness is like a program, it should soon be possible to upload it to a computer, vastly multiplying our intellectual abilities and making immortality possible. This idea underlies the plot of the dystopian film Transcendence (2014), in which Johnny Depp plays a Kurzweil-like scientist who uploads his mind to the Internet, with devastating consequences for humanity.

Fortunately, the IP metaphor has nothing to do with reality, so we do not have to worry about the human mind running amok in cyberspace, and, sadly, we will never achieve immortality by downloading the soul to another medium. The problem is not just the absence of some piece of software in the brain; it runs deeper. Call it the problem of uniqueness, and it is both uplifting and depressing.

Since our brain contains neither "memory devices" nor "images" of external stimuli, and since it changes over a lifetime under the influence of its surroundings, there is no reason to believe that any two people in the world respond to the same stimulus in the same way. If you and I attend the same concert, the changes the music produces in your brain will differ from the changes it produces in mine, because they depend on the unique structure of nerve connections built up over each of our previous lives.

That is why, as Frederic Bartlett showed in his 1932 book Remembering, no two people who hear the same story will retell it in exactly the same way, and over time their versions of the story will grow less and less alike.

To me this is inspiring, because it means that each of us is truly unique, not just in our genes but in the way our brains change over time. It is also depressing, because it makes the neuroscientist's already daunting task all but intractable. Any given change can involve thousands of neurons, millions of neurons, or the entire brain, and the nature of the change is different in every case.

Worse still, even if we could record the state of every one of the brain's 86 billion neurons and simulate it all on a computer, that enormous model would be useless outside the body to which the brain belongs. This is perhaps the most galling misconception about human beings that we owe to the erroneous IP metaphor.

Computers store exact copies of data, which can persist unchanged for long periods even when the power is off, whereas the brain sustains our intellect only for as long as it remains alive. There is no on-off switch: either the brain keeps working, or we are gone. Moreover, as the neuroscientist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain's current state might be meaningless unless we also knew the complete life history of its owner, perhaps even the social context in which he or she grew up.

Meanwhile, vast sums are being spent on brain research based on false premises and on promises that will not be kept. The European Union, for example, launched the Human Brain Project at a cost of $1.3 billion. Persuaded by Henry Markram's tempting promise to build, by 2023, a working simulation of the brain on a supercomputer, one that would radically change the treatment of Alzheimer's disease and other ailments, European officials gave the project virtually unlimited funding. Less than two years after its launch, the project had unravelled and Markram was asked to step down.

People are living organisms, not computers. Accept it. We need to keep doing the hard work of understanding ourselves without burdening it with unnecessary intellectual baggage. In its half-century of existence, the IP metaphor has yielded only a handful of useful insights. It is time to hit the Delete key.
