How much of brain power consumption is for information

As previously answered on this site, the brain uses 20W of power. However, how much of this power consumption is for information processing and how much of it is for maintenance of biological conditions for information processing, such as temperature regulation?


According to "Tightly coupled brain activity and cerebral ATP metabolic rate" which is summarised in the Scientific American article "Why Does the Brain Need So Much Power?", conscious computation accounts for 50% of the brain's power consumption. From the Scientific American article:

Chen and his colleagues used MRS specifically to track the rate of adenosine triphosphate (ATP) production, the primary source of cellular energy, in rat brains. MRS employs a magnetic resonance imaging (MRI) machine programmed to pick up particular elements in the body – in this case, the three phosphorus atoms in each ATP molecule…

The team noted that when the lab rats were knocked out, they produced 50 percent fewer ATP molecules than when they were mildly anesthetized. The ATP produced when the brain is inactive, says Chen, seems to go mostly toward cell maintenance, whereas the additional ATP found in the more alert animals fueled other brain functions. He speculates that only a third of the ATP produced in fully awake brains is used for housekeeping functions, leaving the rest for other activities.
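Taking the figures above at face value, one can do a back-of-the-envelope split of the brain's roughly 20 W budget. This is only a sketch built on Chen's speculative estimate that about a third of the ATP in a fully awake brain goes to housekeeping; it is not a measured result.

    # Back-of-the-envelope split of the brain's ~20 W budget, assuming
    # (per Chen's speculative estimate) ~1/3 of ATP in the awake brain
    # goes to housekeeping and the rest to signalling and computation.
    total_power_w = 20.0               # figure quoted in the question above
    housekeeping_fraction = 1.0 / 3.0  # Chen's rough estimate
    maintenance_w = total_power_w * housekeeping_fraction
    processing_w = total_power_w - maintenance_w
    print(f"maintenance: ~{maintenance_w:.1f} W, processing: ~{processing_w:.1f} W")
    # -> maintenance: ~6.7 W, processing: ~13.3 W

On those assumptions, very roughly 13 of the 20 watts would go to information processing and the remaining 7 or so to keeping the biological machinery running.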



A likely origin for the "ten percent myth" is the reserve energy theories of Harvard psychologists William James and Boris Sidis who, in the 1890s, tested the theory in the accelerated raising of child prodigy William Sidis. Thereafter, James told lecture audiences that people only meet a fraction of their full mental potential, which is considered a plausible claim. [5] The concept gained currency by circulating within the self-help movement of the 1920s; for example, the book Mind Myths: Exploring Popular Assumptions About the Mind and Brain includes a chapter on the ten percent myth that shows a self-help advertisement from the 1929 World Almanac with the line "There is NO LIMIT to what the human brain can accomplish. Scientists and psychologists tell us we use only about TEN PERCENT of our brain power." [6] This became a particular "pet idea" [7] of science fiction writer and editor John W. Campbell, who wrote in a 1932 short story that "no man in all history ever used even half of the thinking part of his brain". [8] In 1936, American writer and broadcaster Lowell Thomas popularized the idea—in a foreword to Dale Carnegie's How to Win Friends and Influence People—by including the falsely precise percentage: "Professor William James of Harvard used to say that the average man develops only ten percent of his latent mental ability". [9]

In the 1970s, the Bulgarian-born psychologist and educator, Georgi Lozanov proposed the teaching method of suggestopedia believing "that we might be using only five to ten percent of our mental capacity". [10] [11] The origin of the myth has also been attributed to Wilder Penfield, the U.S.-born neurosurgeon who was the first director of Montreal Neurological Institute of McGill University. [12]

According to a related origin story, the ten percent myth most likely arose from a misunderstanding (or misrepresentation) of neurological research in the late 19th century or early 20th century. For example, the functions of many brain regions (especially in the cerebral cortex) are complex enough that the effects of damage are subtle, leading early neurologists to wonder what these regions did. [13] The brain was also discovered to consist mostly of glial cells, which seemed to have very minor functions. James W. Kalat, author of the textbook Biological Psychology, points out that neuroscientists in the 1930s knew about the large number of "local" neurons in the brain. The misunderstanding of the function of local neurons may have led to the ten percent myth. [14] The myth might have been propagated simply by a truncation of the idea that some use a small percentage of their brains at any given time. [1] In the same article in Scientific American, John Henley, a neurologist at the Mayo Clinic in Rochester, Minnesota states: "Evidence would show over a day you use 100 percent of the brain". [1]

Although parts of the brain have broadly understood functions, many mysteries remain about how brain cells (i.e., neurons and glia) work together to produce complex behaviors and disorders. Perhaps the broadest, most mysterious question is how diverse regions of the brain collaborate to form conscious experiences. So far, there is no evidence that there is one site for consciousness, which leads experts to believe that it is truly a collective neural effort. Therefore, as with James's idea that humans have untapped cognitive potential, it may be that a large number of questions about the brain have not been fully answered. [1]

Neurologist Barry Gordon describes the myth as false, adding, "we use virtually every part of the brain, and that (most of) the brain is active almost all the time." [1] Neuroscientist Barry Beyerstein sets out six kinds of evidence refuting the ten percent myth: [15]

  1. Studies of brain damage: If 10 percent of the brain is normally used, then damage to other areas should not impair performance. Instead, there is almost no area of the brain that can be damaged without loss of abilities. Even slight damage to small areas of the brain can have profound effects.
  2. Brain scans have shown that no matter what one is doing, all brain areas are always active. Some areas are more active at any one time than others, but barring brain damage, there is no part of the brain that is absolutely not functioning. Technologies such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) allow the activity of the living brain to be monitored. They reveal that even during sleep, all parts of the brain show some level of activity. Only in the case of serious damage does a brain have "silent" areas.
  3. The brain is enormously costly to the rest of the body, in terms of oxygen and nutrient consumption. It can require up to 20 percent of the body's energy—more than any other organ—despite making up only 2 percent of the human body weight (see the quick arithmetic check after this list). [16][17] If 90 percent of it were unnecessary, there would be a large survival advantage to humans with smaller, more efficient brains. If this were true, the process of natural selection would have eliminated the inefficient brain portions. It is also highly unlikely that a brain with so much redundant matter would have evolved in the first place. Given the historical risk of death in childbirth associated with the large brain size (and therefore skull size) of humans, [18] there would be a strong selection pressure against such a large brain size if only 10 percent were actually in use.
  4. Localization of function: Rather than acting as a single mass, the brain has distinct regions for different kinds of information processing. Decades of research have gone into mapping functions onto areas of the brain, and no function-less areas have been found.
  5. Microstructural analysis: In the single-unit recording technique, researchers insert a tiny electrode into the brain to monitor the activity of a single cell. If 90 percent of cells were unused, then this technique would have revealed that.
  6. Brain cells that are not used have a tendency to degenerate. Hence, if 90 percent of the brain were inactive, autopsy of normal adult brains would reveal large-scale degeneration.
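As a quick sanity check on the 20 percent figure in point 3 above (using typical assumed values, not numbers from Beyerstein): an adult basal metabolic rate of roughly 2,000 kcal per day corresponds to about 100 W of continuous power, and 20 percent of that is about 20 W, matching the figure quoted at the top of this page.

    # Rough sanity check with typical assumed values (not from the sources above):
    # does "20 percent of the body's energy" line up with a ~20 W brain?
    kcal_per_day = 2000.0                 # assumed adult basal metabolic rate
    joules_per_kcal = 4184.0
    seconds_per_day = 24 * 60 * 60
    body_power_w = kcal_per_day * joules_per_kcal / seconds_per_day
    brain_power_w = 0.20 * body_power_w   # the brain's ~20 percent share
    print(f"whole body: ~{body_power_w:.0f} W, brain share: ~{brain_power_w:.0f} W")
    # -> whole body: ~97 W, brain share: ~19 W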

In debunking the ten percent myth, Knowing Neurons editor Gabrielle-Ann Torre writes that using one hundred percent of one's brain would not be desirable either. Such unfettered activity would almost certainly trigger an epileptic seizure. [19] Torre writes that, even at rest, a person likely uses as much of his or her brain as reasonably possible through the default mode network, a widespread brain network that is active and synchronized even in the absence of any cognitive task. Thus, "large portions of the brain are never truly dormant, as the 10% myth might otherwise suggest."

Some proponents of the "ten percent of the brain" belief have long asserted that the "unused" ninety percent is capable of exhibiting psychic powers and can be trained to perform psychokinesis and extra-sensory perception. [3] [15] This concept is especially associated with the proposed field of "psionics" (psychic + electronics), a favorite project of the influential science fiction editor John W. Campbell, Jr in the 1950s and '60s. There is no scientifically verified body of evidence supporting the existence of such powers. [15] Such beliefs remain widespread among New Age proponents to the present day.

In 1980, Roger Lewin published an article in Science, "Is Your Brain Really Necessary?", [20] about studies by John Lorber on cerebral cortex losses. He reports the case of a Sheffield University student who had a measured IQ of 126 and passed a Mathematics Degree but who had hardly any discernible brain matter at all since his cortex was extremely reduced by hydrocephalus. The article led to the broadcast of a Yorkshire Television documentary of the same title, though it was about a different patient who had normal brain mass distributed in an unusual way in a very large skull. [21] Explanations were proposed for the first student's situation, with reviewers noting that Lorber's scans evidenced that the subject's brain mass was not absent, but compacted into the small space available, possibly compressed to a greater density than regular brain tissue. [22] [23]

Several books, films, and short stories have been written closely related to this myth. They include the 1986 film Flight of the Navigator; the novel The Dark Fields and its 2011 film adaptation, Limitless (claiming 20 percent rather than the typical 10 percent); the 1991 film Defending Your Life; the ninth book (White Night) of Jim Butcher's book series The Dresden Files; the shōnen manga Psyren; and the 2014 film Lucy—all of which operate under the notion that the rest of the brain could be accessed through use of a drug. [24] Lucy in particular depicts a character who gains increasingly godlike abilities once she surpasses 10 percent, though the film suggests that 10 percent represents brain capacity at a particular time rather than permanent usage.

The myth was examined on a 27 October 2010 episode of MythBusters. The hosts used magnetoencephalography and functional magnetic resonance imaging to scan the brain of someone attempting a complicated mental task, and found that well over 10 percent of the brain, as much as 35 percent, was active during the course of the test. [25]

The ten percent brain myth occurs frequently in advertisements, [26] and in entertainment media it is often cited as fact.

In the season 2 episode of Fetch! With Ruff Ruffman, "Ruff's Case of Blues in the Brain", the show debunks the myth.

In Teen Titans Go!, Beast Boy attempts to solve a Find-It puzzle by unlocking a greater percentage of his brain.


Study finds brain areas involved in seeking information about bad possibilities

Ilya Monosov, PhD, shows data on brain activity obtained from monkeys as they grapple with uncertainty. Monosov and colleagues at Washington University School of Medicine in St. Louis have identified the brain regions involved in choosing whether to find out if a bad event is about to happen. Credit: Washington University Photographic Services

The term "doomscrolling" describes the act of endlessly scrolling through bad news on social media and reading every worrisome tidbit that pops up, a habit that unfortunately seems to have become common during the COVID-19 pandemic.

The biology of our brains may play a role in that. Researchers at Washington University School of Medicine in St. Louis have identified specific areas and cells in the brain that become active when an individual is faced with the choice to seek out, or hide from, information about an aversive event that the individual likely has no power to prevent.

The findings, published June 11 in Neuron, could shed light on the processes underlying psychiatric conditions such as obsessive-compulsive disorder and anxiety—not to mention how all of us cope with the deluge of information that is a feature of modern life.

"People's brains aren't well equipped to deal with the information age," said senior author Ilya Monosov, Ph.D., an associate professor of neuroscience, of neurosurgery and of biomedical engineering. "People are constantly checking, checking, checking for news, and some of that checking is totally unhelpful. Our modern lifestyles could be resculpting the circuits in our brain that have evolved over millions of years to help us survive in an uncertain and ever-changing world."

In 2019, studying monkeys, Monosov laboratory members J. Kael White, Ph.D., then a graduate student, and senior scientist Ethan S. Bromberg-Martin, Ph.D., identified two brain areas involved in tracking uncertainty about positively anticipated events, such as rewards. Activity in those areas drove the monkeys' motivation to find information about good things that may happen.

But it wasn't clear whether the same circuits were involved in seeking information about negatively anticipated events, like punishments. After all, most people want to know whether, for example, a bet on a horse race is likely to pay off big. Not so for bad news.

"In the clinic, when you give some patients the opportunity to get a genetic test to find out if they have, for example, Huntington's disease, some people will go ahead and get the test as soon as they can, while other people will refuse to be tested until symptoms occur," Monosov said. "Clinicians see information-seeking behavior in some people and dread behavior in others."

To find the neural circuits involved in deciding whether to seek information about unwelcome possibilities, first author Ahmad Jezzini, Ph.D., and Monosov taught two monkeys to recognize when something unpleasant might be headed their way. They trained the monkeys to recognize symbols that indicated they might be about to get an irritating puff of air to the face. For example, the monkeys first were shown one symbol that told them a puff might be coming but with varying degrees of certainty. A few seconds after the first symbol was shown, a second symbol was shown that resolved the animals' uncertainty. It told the monkeys that the puff was definitely coming, or it wasn't.

The researchers measured whether the animals wanted to know what was going to happen by noting whether they watched for the second signal or averted their eyes, or, in separate experiments, by letting the monkeys choose among different symbols and their outcomes.

Much like people, the two monkeys had different attitudes toward bad news: One wanted to know; the other preferred not to. The difference in their attitudes toward bad news was striking because they were of like mind when it came to good news. When they were given the option of finding out whether they were about to receive something they liked—a drop of juice—they both consistently chose to find out.

"We found that attitudes toward seeking information about negative events can go both ways, even between animals that have the same attitude about positive rewarding events," said Jezzini, who is an instructor in neuroscience. "To us, that was a sign that the two attitudes may be guided by different neural processes."

By precisely measuring neural activity in the brain while the monkeys were faced with these choices, the researchers identified one brain area, the anterior cingulate cortex, that encodes information about attitudes toward good and bad possibilities separately. They found a second brain area, the ventrolateral prefrontal cortex, that contains individual cells whose activity reflects the monkeys' overall attitudes: yes for info on either good or bad possibilities vs. yes for intel on good possibilities only.

Understanding the neural circuits underlying uncertainty is a step toward better therapies for people with conditions such as anxiety and obsessive-compulsive disorder, which involve an inability to tolerate uncertainty.

"We started this study because we wanted to know how the brain encodes our desire to know what our future has in store for us," Monosov said. "We're living in a world our brains didn't evolve for. The constant availability of information is a new challenge for us to deal with. I think understanding the mechanisms of information seeking is quite important for society and for mental health at a population level."


The human brain’s remarkably low power consumption, and how computers might mimic its efficiency

A new paper from researchers working in the UK and Germany dives into how much power the human brain consumes when performing various tasks — and sheds light on how humans might one day build similar computer-based artificial intelligences. Mapping biological systems isn’t as sexy as the giant discoveries that propel new products or capabilities, but that’s because it’s the final discovery — not the decades of painstaking work that lays the groundwork — that tends to receive all the media attention.

This paper — Power Consumption During Neuronal Computation — will run in an upcoming issue of IEEE’s magazine, “Engineering Intelligent Electronic Systems Based on Computational Neuroscience.” Here at ET, we’ve discussed the brain’s computational efficiency on more than one occasion. Put succinctly, the brain is more power efficient than our best supercomputers by orders of magnitude — and understanding its structure and function is absolutely vital.

Is the brain digital or analog? Both

When we think about compute clusters in the modern era, we think about vast arrays of homogeneous or nearly-homogeneous systems. Sure, a supercomputer might combine two different types of processors — Intel Xeon + Nvidia Tesla, for example, or Intel Xeon + Xeon Phi — but as different as CPUs and GPUs are, they're both still digital processors. The brain, it turns out, incorporates both digital and analog signaling, and the two methods are used in different ways. One potential reason why is that the power efficiency of the two methods varies dramatically depending on how much bandwidth you need and how far the signal needs to travel.

The efficiency of the two systems depends on what SNR (signal to noise) ratio you need to maintain within the system.
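To make that trade-off concrete, here is a toy calculation. It is not taken from the paper; it simply encodes the textbook argument (in the spirit of Sarpeshkar's classic analog-versus-digital analysis) that thermal-noise-limited analog signalling costs power roughly in proportion to the required SNR, while digital cost grows only with the number of bits needed, i.e. logarithmically in SNR. The constants are arbitrary placeholders, so only the crossover behaviour is meaningful.

    import math

    # Toy model (illustrative constants only): relative energy per operation
    # for analog vs. digital signalling as the required precision rises.
    def analog_energy(snr, k_analog=1.0):
        # thermal-noise-limited analog precision costs power roughly ~ SNR
        return k_analog * snr

    def digital_energy(snr, k_digital=50.0):
        # digital cost grows with the number of bits, roughly ~ log2(SNR)
        return k_digital * math.log2(snr)

    for snr in (4, 16, 256, 65536):
        a, d = analog_energy(snr), digital_energy(snr)
        winner = "analog" if a < d else "digital"
        print(f"SNR {snr:>6}: analog {a:8.1f}  digital {d:8.1f}  -> {winner} cheaper")

The crossover is the qualitative point: analog is cheaper at low precision, and digital wins as precision demands rise, which is broadly the trade-off the article describes between the brain's short-range analog processing and its long-range, spike-based signalling.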

One of the other differences between existing supercomputers and the brain is that neurons aren't all the same size and they don't all perform the same function. If you've done high school biology you may remember that neurons are broadly classified as motor neurons, sensory neurons, or interneurons. This type of grouping ignores the subtle differences between the various structures — the actual number of different types of neurons in the brain is estimated at between several hundred and perhaps as many as 10,000, depending on how you classify them.

Compare that to a modern supercomputer that uses two or three (at the very most) CPU architectures to perform calculations and you’ll start to see the difference between our own efforts to reach exascale-level computing and simulate the brain, and the actual biological structure. If our models approximated the biological functions, you’d have clusters of ARM Cortex M0 processors tied to banks of 15-core Xeons which pushed data to Tesla GPUs, which were also tied to some Intel Quark processors with another trunk shifting work to a group of IBM Power8 cores — all working in perfect harmony. Just as modern CPUs have vastly different energy efficiencies, die sizes, and power consumption levels, we see exactly the same trends in neurons.

All three charts are interesting, but it's the chart on the far right that intrigues me most. Relative efficiency is graphed along the vertical axis while the horizontal axis has bits-per-second. Looking at it, you'll notice that the most efficient neurons in terms of bits transferred per ATP molecule (ATP is the cell's energy currency, so bits per ATP is roughly the biological analogue of performance per watt in computing) are also among the slowest in terms of bits per second. The neurons that can transfer the most data in terms of bits-per-second are also the least efficient.

Again, we see clear similarities between the design of modern microprocessors and the characteristics of biological organisms. That’s not to downplay the size of the gap or the dramatic improvements we’d have to make in order to offer similar levels of performance, but there’s no mystic sauce here — and analyzing the biological systems should give us better data on how to tweak semiconductor designs to approximate it.

A neuromorphic chip. Most attempts at emulating the human brain have so far revolved around recreating neurons and synapses with crossbar switches.

Much of what we cover on ExtremeTech is cast in terms of the here-and-now. A better model of neuron energy consumption doesn't really speak to any short-term goals — this won't lead directly to a better microprocessor or a faster graphics card. It doesn't solve the enormous problems we face in trying to shift conventional computing over to a model that more closely mimics the brain's own function (neuromorphic design). But it does move us a critical step closer to the long-term goal of fully understanding (and possibly simulating) the brain. After all, you can't simulate the function of an organ if you don't understand how it signals or under which conditions it functions.

Emulating a brain has at least one thing in common with emulating an instruction set in computing — the greater the gap between the two technologies, typically the larger the power cost to emulate it. The better we can analyze the brain, the better our chances of emulating one without needing industrial power stations to keep the lights on and the cooling running.


Forgetting uses more brain power than remembering

Summary: Intentional forgetting may require more attention to the unwanted information, rather than less.

Source: University of Texas at Austin

Choosing to forget something might take more mental effort than trying to remember it, researchers at The University of Texas at Austin discovered through neuroimaging.

These findings, published in the Journal of Neuroscience, suggest that in order to forget an unwanted experience, more attention should be focused on it. This surprising result extends prior research on intentional forgetting, which focused on reducing attention to the unwanted information through redirecting attention away from unwanted experiences or suppressing the memory’s retrieval.

“We may want to discard memories that trigger maladaptive responses, such as traumatic memories, so that we can respond to new experiences in more adaptive ways,” said Jarrod Lewis-Peacock, the study’s senior author and an assistant professor of psychology at UT Austin. “Decades of research has shown that we have the ability to voluntarily forget something, but how our brains do that is still being questioned. Once we can figure out how memories are weakened and devise ways to control this, we can design treatment to help people rid themselves of unwanted memories.”

Memories are not static. They are dynamic constructions of the brain that regularly get updated, modified and reorganized through experience. The brain is constantly remembering and forgetting information — and much of this happens automatically during sleep.

When it comes to intentional forgetting, prior studies focused on locating “hotspots” of activity in the brain’s control structures, such as the prefrontal cortex, and long-term memory structures, such as the hippocampus. The latest study focuses, instead, on the sensory and perceptual areas of the brain, specifically the ventral temporal cortex, and the patterns of activity there that correspond to memory representations of complex visual stimuli.

“We’re looking not at the source of attention in the brain, but the sight of it,” said Lewis-Peacock, who is also affiliated with the UT Austin Department of Neuroscience and the Dell Medical School.

Using neuroimaging to track patterns of brain activity, the researchers showed a group of healthy adults images of scenes and faces, instructing them to either remember or forget each image.

Their findings not only confirmed that humans have the ability to control what they forget, but that successful intentional forgetting required “moderate levels” of brain activity in these sensory and perceptual areas — more activity than what was required to remember.

“A moderate level of brain activity is critical to this forgetting mechanism. Too strong, and it will strengthen the memory; too weak, and you won’t modify it,” said Tracy Wang, lead author of the study and a psychology postdoctoral fellow at UT Austin. “Importantly, it’s the intention to forget that increases the activation of the memory, and when this activation hits the ‘moderate level’ sweet spot, that’s when it leads to later forgetting of that experience.”

The researchers also found that participants were more likely to forget scenes than faces, which can carry much more emotional information.

“We’re learning how these mechanisms in our brain respond to different types of information, and it will take a lot of further research and replication of this work before we understand how to harness our ability to forget,” said Lewis-Peacock, who has begun a new study using neurofeedback to track how much attention is given to certain types of memories.

“This will make way for future studies on how we process, and hopefully get rid of, those really strong, sticky emotional memories, which can have a powerful impact on our health and well-being,” Lewis-Peacock said.



Computation Power: Human Brain vs Supercomputer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS).

Since 2017, there have been supercomputers that can perform nearly a hundred quadrillion FLOPS. Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and more technologically superior exascale supercomputers.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

At the time of writing, the world's fastest supercomputer is Summit (also known as OLCF-4), developed by IBM for use at Oak Ridge National Laboratory and capable of 200 petaflops.

Each one of its 4,608 nodes (9,216 IBM POWER9 CPUs and 27,648 NVIDIA Tesla GPUs) has over 600 GB of coherent memory (6×16 = 96 GB HBM2 plus 2×8×32 = 512 GB DDR4 SDRAM) which is addressable by all CPUs and GPUs, plus 800 GB of non-volatile RAM that can be used as a burst buffer or as extended memory. The POWER9 CPUs and Volta GPUs are connected using NVIDIA's high speed NVLink.

This allows for a heterogeneous computing model. To provide a high rate of data throughput, the nodes are connected in a non-blocking fat-tree topology using a dual-rail Mellanox EDR InfiniBand interconnect for both storage and inter-process communications traffic, which delivers 200 Gb/s of bandwidth between nodes as well as in-network computing acceleration for communications frameworks such as MPI and SHMEM/PGAS.

Brains Are Very Different From Computers

Our miraculous brains operate on the next order of magnitude. Although it is impossible to calculate precisely, it is postulated that the human brain operates at 1 exaFLOP, which is equivalent to a billion billion calculations per second.

When we discuss computers, we are referring to meticulously designed machines that are based on logic, reproducibility, predictability, and math. The human brain, on the other hand, is a tangled, seemingly random mess of neurons that do not behave in a predictable manner.

The brain is both hardware and software, whereas in computers the two are inherently separate. The same interconnected areas, linked by billions of neurons and perhaps trillions of glial cells, can perceive, interpret, store, analyze, and redistribute at the same time. Computers, by their very definition and fundamental design, have some parts for processing and others for memory; the brain doesn’t make that separation, which makes it hugely efficient.

The same calculations and processes that might take a computer a few million steps can be achieved by a few hundred neuron transmissions, requiring far less energy and performing at far greater efficiency. The amount of energy required to power computations by the world’s fastest supercomputer would be enough to power a building; the human brain achieves the same processing speeds on roughly the energy needed to power a dim light bulb.
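To put rough numbers on that comparison, the sketch below works out performance per watt from approximate figures: Summit's roughly 200 petaflops against a facility draw on the order of 13 MW (an approximate public figure, not stated in this article), and the speculative 1 exaflop brain estimate against roughly 20 W. Treat the result as an order-of-magnitude illustration only.

    # Order-of-magnitude efficiency comparison (approximate, assumed figures).
    summit_flops = 200e15        # ~200 petaflops peak (approximate)
    summit_power_w = 13e6        # ~13 MW facility power draw (approximate)

    brain_flops = 1e18           # the speculative ~1 exaflop estimate above
    brain_power_w = 20.0         # ~20 W

    summit_eff = summit_flops / summit_power_w   # FLOPS per watt
    brain_eff = brain_flops / brain_power_w

    print(f"Summit: ~{summit_eff:.1e} FLOPS/W")
    print(f"Brain : ~{brain_eff:.1e} FLOPS/W")
    print(f"Ratio : ~{brain_eff / summit_eff:,.0f}x in the brain's favour")

On these assumptions the brain comes out millions of times more efficient per watt than Summit.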

One of the things that truly sets brains apart, aside from their clear advantage in raw computing power, is the flexibility they display. Essentially, the human brain can rewire itself, a feat more formally known as neuroplasticity. Neurons are able to disconnect and reconnect with others, and even change in their basic features, something that a carefully constructed computer cannot do.



The Causes of Information Overload

Brain overload stems from a variety of factors, each of which arises from taking in new information. The mind has a limited capacity for attending to information at any given time and is inclined toward novelty in its environment. The combination of limited attention and novelty-seeking is problematic in our modern context, where rapid exposure to information is ubiquitous thanks to easy access to electronic devices and social media.

Despite the brain’s problematic disposition, brain overload isn’t guaranteed to happen because of an excess of information. According to a Pew Research Center survey titled “Information Overload,” 79% of respondents found that access to many kinds of information gave them a sense of control over their lives. The survey found that certain circumstances — and even certain institutions — can be what trigger the effects of overload. Fifty-six percent of respondents reported higher levels of stress caused by governmental agencies, schools, and banks because of the information gathering processes associated with them.

This data set makes sense considering Levitin’s definitional work. While it seems natural that most Americans would want access to updated and continuous information through their devices — smartphones, personal computers, and tablets — it’s also unsurprising that most respondents associated stress with the different kinds of information they receive. What’s more, a near majority of these respondents reported trouble with keeping up with the information they had access to. Since these conditions will only persist as technological innovation continues, we will need to find solutions to the problem.


What percentage of our brain do we use?

The brain is the most complex organ in the human body. Many believe that a person only ever uses 10 percent of their brain. Is there any truth to this?

A person’s brain determines how they experience the world around them. The brain weighs about 3 pounds and contains around 100 billion neurons — cells that carry information.

In this article, we explore how much of the brain a person uses. We also bust some widely held myths and reveal some interesting facts about the brain.

Studies have debunked the myth that humans use only 10 percent of their brain.

According to a survey from 2013, around 65 percent of Americans believe that we only use 10 percent of our brain.

But this is just a myth, according to an interview with neurologist Barry Gordon in Scientific American. He explained that the majority of the brain is almost always active.

The 10 percent myth was also debunked in a study published in Frontiers in Human Neuroscience.

One common brain imaging technique, called functional magnetic resonance imaging (fMRI), can measure activity in the brain while a person is performing different tasks.

Using this and similar methods, researchers show that most of our brain is in use most of the time, even when a person is performing a very simple action.

A lot of the brain is even active when a person is resting or sleeping.

The percentage of the brain in use at any given time varies from person to person. It also depends on what a person is doing or thinking about.

It’s not clear how this myth began, but there are several possible sources.

In an article published in a 1907 edition of the journal Science, psychologist and author William James argued that humans only use part of their mental resources. However, he did not specify a percentage.

The figure was referenced in Dale Carnegie’s 1936 book How to Win Friends and Influence People. The myth was described as something the author’s college professor used to say.

There is also a belief among scientists that neurons make up around 10 percent of the brain’s cells. This may have contributed to the 10 percent myth.

The myth has been repeated in articles, TV programs, and films, which helps to explain why it is so widely believed.

Like any other organ, the brain is affected by a person’s lifestyle, diet, and the amount that they exercise.

To improve the health and function of the brain, a person can do the following things.

Eat a balanced diet

Eating well improves overall health and well-being. It also reduces the risk of developing health issues that may lead to dementia.

The following foods promote brain health:

  • Fruits and vegetables with dark skins. Some are rich in vitamin E, such as spinach, broccoli, and blueberries. Others are rich in beta carotene, including red peppers and sweet potatoes. Vitamin E and beta carotene promote brain health.
  • Oily fish. These types of fish, such as salmon, mackerel, and tuna, are rich in omega-3 fatty acids, which may support cognitive function.
  • Walnuts and pecans. They are rich in antioxidants, which promote brain health.


Exercise regularly

Regular exercise also reduces the risk of health problems that may lead to dementia.

Cardiovascular activities, such as walking briskly for 30 minutes a day, can be enough to reduce the risk of brain function declining.


Keep the brain active

The more a person uses their brain, the better their mental functions become. For this reason, brain training exercises are a good way to maintain overall brain health.

A recent study conducted over 10 years found that people who used brain training exercises reduced the risk of dementia by 29 percent.

The most effective training focused on increasing the brain’s speed and ability to process complex information quickly.

There are a number of other popular myths about the brain. These are discussed and dispelled below.

Left-brained vs. right-brained

Many believe that a person is either left-brained or right-brained, with right-brained people being more creative, and left-brained people more logical.

However, research suggests that this is a myth — people are not dominated by one brain hemisphere or the other. A healthy person is constantly using both hemispheres.

It is true that the hemispheres have different tasks. For instance, a study in PLOS Biology discussed the extent to which the left hemisphere is involved in processing language, and the right in processing emotions.

Alcohol and the brain

Long-term alcoholism can lead to a number of health problems, including brain damage.

It is not, however, as simple as saying that drinking alcohol kills brain cells — this is a myth. The reasons for this are complicated.

If a woman drinks too much alcohol while pregnant, it can affect the brain development of the fetus, and even cause fetal alcohol syndrome.

The brains of babies with this condition may be smaller and often contain fewer brain cells. This may lead to difficulties with learning and behavior.

Subliminal messages

Research suggests that subliminal messages can provoke an emotional response in people who are unaware that they have received an emotional stimulus. But can subliminal messages help a person to learn new things?

A study published in Nature Communications found that hearing recordings of vocabulary when sleeping could improve a person’s ability to remember the words. This was only the case in people who had already studied the vocabulary.

Researchers noted that hearing information while asleep cannot help a person to learn new things. It may only improve recall of information learned earlier, while awake.

Brain wrinkles

The human brain is covered in folds, commonly known as wrinkles. The dip in each fold is called the sulcus, and the raised part is called the gyrus.

Some people believe that a new wrinkle is formed every time a person learns something. This is not the case.

The brain starts to develop wrinkles before a person is born, and this process continues throughout childhood.

The brain is constantly making new connections and breaking old ones, even in adulthood.


Can I increase my brain power?

What happens when you attach several electrodes to your forehead, connect them via wires to a nine-volt battery and resistor, ramp up the current and send an electrical charge directly into your brain? Most people would be content just to guess, but last summer a 33-year-old from Alabama named Anthony Lee decided to find out. "Here we go… oooahh, that stings a little!" he says, in one of the YouTube videos recording his exploits. "Whoa. That hurts… Ow!" The video cuts out. When Lee reappears, the electrodes are gone: "Something very strange happened," he says thoughtfully. "It felt like something popped." (In another video, he reports a sudden white flash in his visual field, which he describes, in a remarkably calm voice, as "cool".) You might conclude from this that Lee is a very foolish person, but the quest he's on is one that has occupied scientists, philosophers and fortune-hunters for centuries: to find some artificial way to improve upon the basic cognitive equipment we're born with, and thus become smarter and maintain mental sharpness into old age. "It started with Limitless," Lee told me – the 2011 film in which an author suffering from writer's block discovers a drug that can supercharge his faculties. "I figured, I'm a pretty average-intelligence guy, so I could use a little stimulation."

The scientific establishment, it's fair to say, remains far from convinced that it's possible to enhance your brain's capacities in a lasting way – whether via electrical jolts, brain-training games, dietary supplements, drugs or anything else. But that hasn't impeded the growth of a huge industry – and thriving amateur subculture – of "neuro-enhancement", which, according to the American Psychological Association, is worth $1bn a year. "Brain fitness technology" has been projected to be worth up to $8bn in 2015 as baby boomers age. Anthony Lee belongs to the sub-subculture of DIY transcranial direct-current stimulation, or tDCS, whose members swap wiring diagrams and cautionary tales online, though if that makes you queasy, you can always pay £179 for Foc.us, a readymade tDCS headset that promises to "make your synapses fire faster" and "excite your prefrontal cortex", so that you can "get the edge in online gaming". Or you could start spending time on a brain-training site such as Lumosity or HappyNeuron, the latter boasting games "scientifically designed to stimulate your cognitive functions". Or start drinking Brain TonIQ or Brain Candy or Nawgan or NeuroPassion, or any of the other "functional drinks" that promise to push you past your cognitive limits.

One problem with Brain TonIQ is that it's disgusting, albeit not as disgusting as Nawgan ("What To Drink When You Want To Think"), which tastes so metallic, it's like drinking the can that it comes in. For the last two weeks, I've been working through a succession of these drinks – and a packet of Focus Formula herbal pills – while wearing a NeuroSky MindWave headset, which thankfully isn't sending current to my brain, but claims to be monitoring my brainwaves via a sensor on my forehead. This is a system of "neurofeedback": the headset is linked to my laptop, which plays the sound of Buddhist chanting through headphones; when my attention wavers, the pitch of the chanting falls, so I'm supposedly being trained to concentrate. I've been playing brain-training games daily. At the start of all this, I took a "culture-neutral" intelligence test, and scored 129, on a scale derived from IQ (which stops being meaningfully measurable around 200). It's not technically an IQ score – and IQ scores are very questionable things, anyway – but if I can boost it by a few points, I'll be willing to declare victory.

Yes, yes, I'm aware that this is all hopelessly unscientific. The intelligence test wasn't a formal one; the placebo effect could be enormous; and even if some of my tactics worked, I'd have no way of identifying which. But in the world of cognitive enhancement, good science regularly takes a back seat to speculative self-experimentation. Dwell on the science and it's liable to make you anxious: according to one study, a key ingredient in Brain TonIQ, dimethylaminoethanol, has been shown to decrease the average lifespan of aged quail. When you're trying to become superhuman at thinking, there are some things it's best not to think about.

The big conundrum at the core of the brain-enhancement debate is this: what counts as "getting smarter"? Many of the claims made by the industry aren't false, but rather boringly true: of course online training games "stimulate your cognitive functions" and "change your brain", since pretty much everything does. And nobody disputes that it's possible to learn new skills, such as speaking German or riding a bike; nor that taking a substance such as Modafinil or Adderall, now routinely deployed by some students as "study drugs", will temporarily supercharge your focus. It's also pretty easy – relatively speaking – to boost your working memory, for example by learning tricks to remember long strings of digits, as described by Joshua Foer in his bestseller Moonwalking With Einstein. But those tricks aren't transferable: ask a champion digit-memoriser to solve a cryptic crossword, and he'll probably do no better than the rest of us.

The holy grail is to find a way of increasing "fluid intelligence", our underlying capacity to hold information in conscious memory and then manipulate it in order to solve complex problems or come up with new ideas. Fluid intelligence is what IQ tests try to measure – albeit, historically, with all sorts of cultural biases – and the implications of improving it could be huge. "There are approximately 10 million scientists in the world," Nick Bostrom, of Oxford University's Future of Humanity Institute, told Time magazine a while back. "If you improve their cognition by 1%, the gain would hardly be noticeable. But it could be equivalent to instantly creating 100,000 new scientists." But even how to think about this in the first place is a tricky question, as the Imperial College neuroscientist Adam Hampshire points out, because "general intelligence" is a construct: it's an idea we use to group together certain aspects of brainpower, so it's unlikely to be related to just one aspect or system in the brain.

Until only six years ago, when it came to the possibility of increasing fluid intelligence, the verdict was almost uniformly pessimistic. But then, in 2008, a pair of workaholic psychologists from Switzerland, Susanne Jaeggi and her boyfriend Martin Buschkuehl, published a study that sent eyebrows shooting upwards, and that's still being fiercely debated today. "That study was the D-day invasion," says the science writer Dan Hurley, whose book Smarter: The New Science Of Building Brain Power will be published in the UK this month. "That really put down a marker that said: this is real. You can really do this."

The Jaeggi study relied on an especially vicious brain-training game known as the "dual n-back". You can try it for yourself at soakyourhead.com, but I can't recommend it, because it's hellish. "The first time they try it, everybody's impression is, 'Oh, this is impossible, this is crazy, this is awful,'" says Hurley. "It feels like someone just asked you to pick up a car." The game works like this: you hear a voice, slowly reciting a sequence of letters: "B… K… P… K…" Whenever you hear a letter that's the same as the one before last, you press the L key on your computer. So far, so tolerable – but at the same time, you're playing a visual version of the same game, in which one of a set of eight squares lights up in orange; when the illuminated square is the same as the one before last, you press your computer's A key. Doing both these tasks at once feels savagely unpleasant, but if you make it through to the end, something worse is in store: on the next level, you do the same thing, except you're looking for matches two times before last. If you can make it to the next stage – looking for matches three times before last – you're probably a witch.
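For readers who want to see the mechanics rather than endure them, here is a minimal sketch of the matching rule behind a dual n-back trial. It is not the soakyourhead.com or Jaeggi software; it just generates two random streams (letters and square positions) and flags, on each trial, whether the current item matches the one n trials earlier in that stream, which is the judgement the player has to make on both streams at once.

    import random

    # Minimal dual n-back sketch (not the Jaeggi/soakyourhead implementation):
    # two independent streams; a "hit" is when the current item equals the
    # item presented n trials earlier in the same stream.
    def run_dual_n_back(n=2, trials=20, seed=0):
        rng = random.Random(seed)
        letters = [rng.choice("BKPTQ") for _ in range(trials)]
        squares = [rng.randrange(8) for _ in range(trials)]  # 8 possible positions
        for i in range(trials):
            letter_hit = i >= n and letters[i] == letters[i - n]
            square_hit = i >= n and squares[i] == squares[i - n]
            print(f"trial {i:2d}: letter {letters[i]} "
                  f"{'<- press L' if letter_hit else '':10s} "
                  f"square {squares[i]} {'<- press A' if square_hit else ''}")

    run_dual_n_back()

Raising n from 2 to 3 is all it takes to produce the harder level described above; the difficulty lies entirely in the player's working memory, not in the program.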

'I’ve been working through these drinks while wearing a NeuroSky MindWave headset, which claims to be monitoring my brainwaves via a sensor on my forehead.' Photograph: Christopher Lane for the Guardian

Jaeggi and Buschkuehl persuaded undergraduates at the University of Bern, and later other subjects, to submit to the dual n-back for several minutes a day, over weeks. They tested their fluid intelligence using Raven's Progressive Matrices, a widely respected test involving visual pattern manipulations. (Think of those old newspaper ads for Mensa, and you won't be far off.) What they discovered upended the conventional wisdom: after 19 days of training, their subjects recorded a 44% average performance boost on the Raven test. By then, the first generation of commercial brain games had been largely discredited: playing Dr Kawashima's Brain Training on your Nintendo, it's now clear, will only make you better at playing Dr Kawashima's Brain Training on your Nintendo. But playing the dual n-back, it appeared, could truly make people more intelligent.

There are few surer ways to create a firestorm among psychologists and neuroscientists, it turns out, than to claim such impressive changes to an aspect of intelligence long considered fixed. Some in the field compared the Jaeggi findings to cold fusion, which is as close as you can get to accusing a fellow academic of hallucinating while remaining minimally polite. Some prominently reported attempts to replicate the Jaeggi findings failed, but others found similar positive results in schoolchildren and the elderly. In 2013, a meta-analysis based on 23 studies found "no convincing evidence of the generalisation of working memory training to other skills", though there's been debate about the selection criteria involved. An earlier British study, conducted with the BBC show Bang Goes The Theory, reached similar conclusions, but didn't focus on the same kind of game. Interviews conducted for Hurley's book show the scientific establishment to be well and truly divided. It's all "a bit of a mess", Adam Hampshire says, due to the proliferation of numerous small-scale studies, which makes false positives far more likely: "If 1,000 people roll a dice 16 times, some of them are going to get just high numbers" – and those are the studies that get published.

"I know it sounds as if we're just pouring cold water on this, but the thing is, we've been disappointed so many times before," adds James Thompson, a senior honorary lecturer in psychology at University College London, and a prominent sceptic. "About 40 years ago, it was hyperbaric oxygen for pregnant women, so they'd give birth to geniuses. I got transcranial stimulation at Guy's hospital in 1969, as a guinea pig. But then you do the hard research and you don't see much difference."

Yet it would be very strange, ultimately, if it were to prove utterly impossible to modify your brain's basic capacities through any form of training. The brain is a physical organ, and its processes are physical processes; why should the capacities we label "fluid intelligence" be uniquely immune to environmental impacts? Your intelligence is surely heavily influenced by your genes – but so (for example) is your height, and that can be affected by environmental factors, specifically how well you're nourished as a child. "Some people want to assert that it's unchangeable, as if that's hard science," Hurley says. "But it's actually a much more magical way of thinking about the mind to say that the environment can't possibly have any effect."

After four days of 20 minutes doing the dual n-back, I have no idea if it's working, but it's definitely hurting. Sadly, that's probably a good sign, and it's one thing on which researchers do tend to agree: if intelligence can be boosted by brain games – a very big if – they almost certainly won't be enjoyable ones. Unless the task involved keeps getting harder, so that you never quite feel you've got the hang of it, there's no way you'll get more intelligent. When you master a task, your brain becomes more efficient at performing it. And "efficiency is not your friend when it comes to cognitive improvement", as Andrea Kuszewski, a behavioural therapist trained in neuroscience, and a believer in the promise of intelligence-boosting, puts it. She points to studies of people playing Tetris, which showed an increase in cortical activity and cortical thickness as they struggled to get to grips with the game – but a decrease in both once they'd mastered it.

This is the closest thing you're going to get to a solid, science-backed piece of advice, when it comes to exercising your brain: don't let things get too fun. Once you're pretty good at sudoku, stop doing sudoku; switch to something you're worse at. Keep seeking challenges that make your head hurt. Nobody ever said getting smarter was going to be easy.

There could be ways to become smarter more quickly, though – so long as you're willing, like Anthony Lee, to do slightly nerve-racking things with electricity. ("I'm not afraid to experiment," Lee says: as a child, he was always the one to accept dares. "But I'm a relatively responsible adult, so if I felt there was real danger, I don't think I'd do it." Then again, he adds, with a laugh, "I really don't understand all that much about electronics.") At a research lab in New Mexico a few years ago, according to a report in Nature, volunteers wearing small wet sponges on their temples played Darwars Ambush!, a soldier-training game sponsored by the US Defense Advanced Research Projects Agency (Darpa). Darwars Ambush! involves navigating virtual landscapes reminiscent of urban war zones, learning to spot hidden gunmen or deadly explosive devices. After just a few hours' training, players who'd been receiving a 2-milliamp current through the sponges on their heads showed twice as much improvement on the game as those getting a 20th of that.

The idea of electrically stimulating human bodies goes back at least to the 19th century, when it was used to cure "melancholy"; much later, electroconvulsive therapy would be used to induce seizures in psychiatric patients. Since then, studies have demonstrated that a gentler approach, transcranial magnetic stimulation, can alleviate serious depression and perhaps even trigger bursts of "savant" intellectual prowess, reminiscent of the kind depicted in Rain Man. "How long," wondered the New York Times in 2003, "before Americans are walking around with humming antidepression helmets and math-enhancing 'hair-dryers' on their heads?"

The answer: one decade, if you count the Foc.us tDCS headset, now on sale in the US and UK. The Foc.us describes itself as an accessory for gamers, reportedly since it's easier to comply with medical regulations that way. But the implicit promise is the same as for the Darpa initiative, and Lee's home-based tinkering: by temporarily boosting cognitive capacity, tDCS might hugely speed up the learning process. It has also been shown, in one study, to induce "a feeling of anticipated challenge and [a] strong motivation to overcome it", which would presumably aid learning, too.

Precisely why tDCS works remains partly mysterious – though it's not enormously surprising that neurons, which transmit information via electrical signals, might do so faster and better with an electrical boost from outside. Dan Hurley quotes Roy Hoshi Hamilton, director of the Laboratory for Cognition and Neural Stimulation at the University of Pennsylvania: "What is a thought? A thought is what happens when some pattern of firing of neurons has happened in your brain. So if you have a technology that makes it ever so slightly easier for lots and lots of these neurons… to do their thing, then it doesn't seem so far-fetched that such a technology, be it ever so humble, would have an effect on cognition." Repeat the process enough times, and you'd expect the brain's neural pathways to change, too.

All of which is potentially dangerous, if you do it wrong. You might feel inclined to stick to brain games instead, on the rationale that even if they don't work, they can't do any harm. But that position's arguably misguided. Your time is finite, and every hour you spend wrestling with the dual n-back is one you could have spent doing any of the more mundane things that will certainly promote brain health: doing sufficient physical exercise, getting enough sleep, and preparing and eating healthy food. "Live a good clean life, get proper sleep and you'll be at the peak of whatever your potential performance is," James Thompson suggests. "And we use our intelligence to do specific tasks, so don't waste your time remembering numbers backwards – read a good statistics book. Learn about modern genetics. Read a history of intellectual discovery. Whenever people talk about spending 24 hours on the dual n-back, I think, well, yes, but what else could I do with 24 hours?"

I didn't spend 24 hours on the dual n-back, or even 12, but I did spend as long as I ever plan to, pumped up on Brain TonIQ or Brain Candy, both of which seemed to give me mild headaches. (I bought these drinks in the US, and not all are available in the UK: the Neuro range, including Neuro Passion and Neuro Sonic, has been temporarily withdrawn from British sale, because ingredients in some of the range don't have regulators' approval.)

After two weeks, I retook the intelligence test, based on the Raven matrices, and scored four points higher, at 133. Which proves absolutely nothing at all, though it did make me feel briefly smug.

I plan on never doing the dual n-back again, but I might take Andrea Kuszewski's advice and try turning off my smartphone's maps function, forcing myself to navigate the old-fashioned way. "Look, technology like GPS is great," Kuszewski says, "but there are always costs. If you used to walk to work but then you bought a car and you start driving everywhere instead, well, it'd be a lot easier. But everyone knows your body's going to suffer as a result! Why should it be any different with your brain?"

The long-sought secret of boosting intelligence could turn out to be straightforward – wherever possible, do things the harder way. I know, I know: it's not what I wanted to hear, either.


Pumped for action

But if that's true, how do we explain why Karpov grew too skinny to compete in his chess competition? The general consensus is that it mostly comes down to stress and reduced food consumption, not mental exhaustion.

Elite chess players are under intense pressure that causes stress, which can lead to an elevated heart rate, faster breathing and sweating. Combined, these effects burn calories over time. In addition, elite players must sometimes sit for as much as 8 hours at a time, which can disrupt their regular eating patterns. Energy-loss is also something that stage performers and musicians might experience, since they&rsquore often under high-stress, and have disrupted eating schedules.

"Keeping your body pumped up for action for long periods of time is very energy demanding,&rdquo Messier explained. &ldquoIf you can&rsquot eat as often or as much as you can or would normally — then you might lose weight.&rdquo

So, the verdict is in: Sadly, thinking alone won't make us slim. But when you next find yourself starved of inspiration, one extra square of chocolate probably won't hurt.





Although parts of the brain have broadly understood functions, many mysteries remain about how brain cells (i.e., neurons and glia) work together to produce complex behaviors and disorders. Perhaps the broadest, most mysterious question is how diverse regions of the brain collaborate to form conscious experiences. So far, there is no evidence that there is one site for consciousness, which leads experts to believe that it is truly a collective neural effort. Therefore, as with James's idea that humans have untapped cognitive potential, it may be that a large number of questions about the brain have not been fully answered. [1]

Neurologist Barry Gordon describes the myth as false, adding, "we use virtually every part of the brain, and that (most of) the brain is active almost all the time." [1] Neuroscientist Barry Beyerstein sets out six kinds of evidence refuting the ten percent myth: [15]

  1. Studies of brain damage: If 10 percent of the brain is normally used, then damage to other areas should not impair performance. Instead, there is almost no area of the brain that can be damaged without loss of abilities. Even slight damage to small areas of the brain can have profound effects.
  2. Brain scans have shown that no matter what one is doing, all brain areas are always active. Some areas are more active at any one time than others, but barring brain damage, there is no part of the brain that is absolutely not functioning. Technologies such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) allow the activity of the living brain to be monitored. They reveal that even during sleep, all parts of the brain show some level of activity. Only in the case of serious damage does a brain have "silent" areas.
  3. The brain is enormously costly to the rest of the body, in terms of oxygen and nutrient consumption. It can require up to 20 percent of the body's energy—more than any other organ—despite making up only 2 percent of the human body weight. [16][17] If 90 percent of it were unnecessary, there would be a large survival advantage to humans with smaller, more efficient brains. If this were true, the process of natural selection would have eliminated the inefficient brain portions. It is also highly unlikely that a brain with so much redundant matter would have evolved in the first place; given the historical risk of death in childbirth associated with the large brain size (and therefore skull size) of humans, [18] there would be a strong selection pressure against such a large brain size if only 10 percent were actually in use.
  4. Localization of function: Rather than acting as a single mass, the brain has distinct regions for different kinds of information processing. Decades of research have gone into mapping functions onto areas of the brain, and no function-less areas have been found.
  5. Microstructural analysis: In the single-unit recording technique, researchers insert a tiny electrode into the brain to monitor the activity of a single cell. If 90 percent of cells were unused, then this technique would have revealed that.
  6. Neural degeneration: Brain cells that are not used have a tendency to degenerate. Hence, if 90 percent of the brain were inactive, autopsies of normal adult brains would reveal large-scale degeneration.

In debunking the ten percent myth, Knowing Neurons editor Gabrielle-Ann Torre writes that using one hundred percent of one's brain would not be desirable either. Such unfettered activity would almost certainly trigger an epileptic seizure. [19] Torre writes that, even at rest, a person likely uses as much of his or her brain as reasonably possible through the default mode network, a widespread brain network that is active and synchronized even in the absence of any cognitive task. Thus, "large portions of the brain are never truly dormant, as the 10% myth might otherwise suggest."

Some proponents of the "ten percent of the brain" belief have long asserted that the "unused" ninety percent is capable of exhibiting psychic powers and can be trained to perform psychokinesis and extra-sensory perception. [3] [15] This concept is especially associated with the proposed field of "psionics" (psychic + electronics), a favorite project of the influential science fiction editor John W. Campbell, Jr in the 1950s and '60s. There is no scientifically verified body of evidence supporting the existence of such powers. [15] Such beliefs remain widespread among New Age proponents to the present day.

In 1980, Roger Lewin published an article in Science, "Is Your Brain Really Necessary?", [20] about studies by John Lorber on cerebral cortex losses. He reports the case of a Sheffield University student who had a measured IQ of 126 and passed a Mathematics Degree but who had hardly any discernible brain matter at all since his cortex was extremely reduced by hydrocephalus. The article led to the broadcast of a Yorkshire Television documentary of the same title, though it was about a different patient who had normal brain mass distributed in an unusual way in a very large skull. [21] Explanations were proposed for the first student's situation, with reviewers noting that Lorber's scans evidenced that the subject's brain mass was not absent, but compacted into the small space available, possibly compressed to a greater density than regular brain tissue. [22] [23]

Several books, films, and short stories have been written closely related to this myth. They include the 1986 film Flight of the Navigator; the novel The Dark Fields and its 2011 film adaptation, Limitless (claiming 20 percent rather than the typical 10 percent); the 1991 film Defending Your Life; the ninth book (White Night) of Jim Butcher's book series The Dresden Files; the shōnen manga Psyren; and the 2014 film Lucy—all of which operate under the notion that the rest of the brain could be accessed through use of a drug. [24] Lucy in particular depicts a character who gains increasingly godlike abilities once she surpasses 10 percent, though the film suggests that 10 percent represents brain capacity at a particular time rather than permanent usage.

The myth was examined on a 27 October 2010 episode of MythBusters. The hosts used magnetoencephalography and functional magnetic resonance imaging to scan the brain of someone attempting a complicated mental task, and found that well over 10 percent—as much as 35 percent—of the brain was active during the test. [25]

The ten percent brain myth occurs frequently in advertisements, [26] and in entertainment media it is often cited as fact.

The season 2 episode of Fetch! With Ruff Ruffman, "Ruff's Case of Blues in the Brain", also debunks the myth.

In Teen Titans Go!, Beast Boy attempts to solve a Find-It puzzle by unlocking a greater percentage of his brain.


Can I increase my brain power?

What happens when you attach several electrodes to your forehead, connect them via wires to a nine-volt battery and resistor, ramp up the current and send an electrical charge directly into your brain? Most people would be content just to guess, but last summer a 33-year-old from Alabama named Anthony Lee decided to find out. "Here we go… oooahh, that stings a little!" he says, in one of the YouTube videos recording his exploits. "Whoa. That hurts… Ow!" The video cuts out. When Lee reappears, the electrodes are gone: "Something very strange happened," he says thoughtfully. "It felt like something popped." (In another video, he reports a sudden white flash in his visual field, which he describes, in a remarkably calm voice, as "cool".) You might conclude from this that Lee is a very foolish person, but the quest he's on is one that has occupied scientists, philosophers and fortune-hunters for centuries: to find some artificial way to improve upon the basic cognitive equipment we're born with, and thus become smarter and maintain mental sharpness into old age. "It started with Limitless," Lee told me – the 2011 film in which an author suffering from writer's block discovers a drug that can supercharge his faculties. "I figured, I'm a pretty average-intelligence guy, so I could use a little stimulation."

The scientific establishment, it's fair to say, remains far from convinced that it's possible to enhance your brain's capacities in a lasting way – whether via electrical jolts, brain-training games, dietary supplements, drugs or anything else. But that hasn't impeded the growth of a huge industry – and thriving amateur subculture – of "neuro-enhancement", which, according to the American Psychological Association, is worth $1bn a year. "Brain fitness technology" has been projected to be worth up to $8bn in 2015 as baby boomers age. Anthony Lee belongs to the sub-subculture of DIY transcranial direct-current stimulation, or tDCS, whose members swap wiring diagrams and cautionary tales online, though if that makes you queasy, you can always pay £179 for Foc.us, a readymade tDCS headset that promises to "make your synapses fire faster" and "excite your prefrontal cortex", so that you can "get the edge in online gaming". Or you could start spending time on a brain-training site such as Lumosity or HappyNeuron, the latter boasting games "scientifically designed to stimulate your cognitive functions". Or start drinking Brain TonIQ or Brain Candy or Nawgan or NeuroPassion, or any of the other "functional drinks" that promise to push you past your cognitive limits.

One problem with Brain TonIQ is that it's disgusting, albeit not as disgusting as Nawgan ("What To Drink When You Want To Think"), which tastes so metallic, it's like drinking the can that it comes in. For the last two weeks, I've been working through a succession of these drinks – and a packet of Focus Formula herbal pills – while wearing a NeuroSky MindWave headset, which thankfully isn't sending current to my brain, but claims to be monitoring my brainwaves via a sensor on my forehead. This is a system of "neurofeedback": the headset is linked to my laptop, which plays the sound of Buddhist chanting through headphones; when my attention wavers, the pitch of the chanting falls, so I'm supposedly being trained to concentrate. I've been playing brain-training games daily. At the start of all this, I took a "culture-neutral" intelligence test, and scored 129, on a scale derived from IQ (which stops being meaningfully measurable around 200). It's not technically an IQ score – and IQ scores are very questionable things, anyway – but if I can boost it by a few points, I'll be willing to declare victory.
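
(For the curious: the logic of that feedback loop is simple. Below is a toy sketch in Python of how such a pitch-based loop might work, with a simulated attention reading standing in for whatever the headset actually reports and a printout standing in for retuning the audio; it is an illustration, not the device's real interface.)

```python
import random
import time

def read_attention():
    """Stand-in for the headset's attention estimate (0 = distracted,
    100 = focused). A real setup would read this from the device; here
    we just simulate a value."""
    return random.randint(0, 100)

def neurofeedback_loop(baseline=50, steps=10, interval=0.5):
    """Toy neurofeedback loop: when the attention reading drops below
    the baseline, lower the pitch of the chanting so the listener
    notices the change and refocuses."""
    for _ in range(steps):
        attention = read_attention()
        # Map attention (0-100) onto a pitch factor: focused listening
        # keeps the chant at normal pitch; wandering attention drags it down.
        pitch = 1.0 if attention >= baseline else 0.5 + 0.5 * attention / 100
        print(f"attention {attention:3d} -> chant pitch x{pitch:.2f}")
        time.sleep(interval)

neurofeedback_loop()
```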

Yes, yes, I'm aware that this is all hopelessly unscientific. The intelligence test wasn't a formal one; the placebo effect could be enormous; and even if some of my tactics worked, I'd have no way of identifying which. But in the world of cognitive enhancement, good science regularly takes a back seat to speculative self-experimentation. Dwell on the science and it's liable to make you anxious: according to one study, a key ingredient in Brain TonIQ, dimethylaminoethanol, has been shown to decrease the average lifespan of aged quail. When you're trying to become superhuman at thinking, there are some things it's best not to think about.

The big conundrum at the core of the brain-enhancement debate is this: what counts as "getting smarter"? Many of the claims made by the industry aren't false, but rather boringly true: of course online training games "stimulate your cognitive functions" and "change your brain", since pretty much everything does. And nobody disputes that it's possible to learn new skills, such as speaking German or riding a bike; nor that taking a substance such as Modafinil or Adderall, now routinely deployed by some students as "study drugs", will temporarily supercharge your focus. It's also pretty easy – relatively speaking – to boost your working memory, for example by learning tricks to remember long strings of digits, as described by Joshua Foer in his bestseller Moonwalking With Einstein. But those tricks aren't transferable: ask a champion digit-memoriser to solve a cryptic crossword, and he'll probably do no better than the rest of us.

The holy grail is to find a way of increasing "fluid intelligence", our underlying capacity to hold information in conscious memory and then manipulate it in order to solve complex problems or come up with new ideas. Fluid intelligence is what IQ tests try to measure – albeit, historically, with all sorts of cultural biases – and the implications of improving it could be huge. "There are approximately 10 million scientists in the world," Nick Bostrom, of Oxford University's Future of Humanity Institute, told Time magazine a while back. "If you improve their cognition by 1%, the gain would hardly be noticeable. But it could be equivalent to instantly creating 100,000 new scientists." But even how to think about this in the first place is a tricky question, as the Imperial College neuroscientist Adam Hampshire points out, because "general intelligence" is a construct: it's an idea we use to group together certain aspects of brainpower, so it's unlikely to be related to just one aspect or system in the brain.

Until only six years ago, when it came to the possibility of increasing fluid intelligence, the verdict was almost uniformly pessimistic. But then, in 2008, a pair of workaholic psychologists from Switzerland, Susanne Jaeggi and her boyfriend Martin Buschkuehl, published a study that sent eyebrows shooting upwards, and that's still being fiercely debated today. "That study was the D-day invasion," says the science writer Dan Hurley, whose book Smarter: The New Science Of Building Brain Power will be published in the UK this month. "That really put down a marker that said: this is real. You can really do this."

The Jaeggi study relied on an especially vicious brain-training game known as the "dual n-back". You can try it for yourself at soakyourhead.com, but I can't recommend it, because it's hellish. "The first time they try it, everybody's impression is, 'Oh, this is impossible, this is crazy, this is awful,'" says Hurley. "It feels like someone just asked you to pick up a car." The game works like this: you hear a voice, slowly reciting a sequence of letters: "B… K… P… K…" Whenever you hear a letter that's the same as the one before last, you press the L key on your computer. So far, so tolerable – but at the same time, you're playing a visual version of the same game, in which one of a set of eight squares lights up in orange; when the illuminated square is the same as the one before last, you press your computer's A key. Doing both these tasks at once feels savagely unpleasant, but if you make it through to the end, something worse is in store: on the next level, you do the same thing, except you're looking for matches two times before last. If you can make it to the next stage – looking for matches three times before last – you're probably a witch.
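
To make the mechanics concrete, here is a minimal sketch of the match-checking at the heart of a dual n-back trial, written in Python as an illustration rather than anything from the game's actual implementation:

```python
import random

def run_dual_n_back(n=2, trials=20, letters="BKPQTW"):
    """Toy dual n-back: one auditory stream (letters) and one visual
    stream (positions 0-7). A trial is a 'match' in a stream when its
    item equals the one presented n steps earlier; a real game would
    collect key presses (L for audio, A for visual) and score them."""
    audio = [random.choice(letters) for _ in range(trials)]
    visual = [random.randrange(8) for _ in range(trials)]

    for i in range(trials):
        tags = []
        if i >= n and audio[i] == audio[i - n]:
            tags.append("audio match -> press L")
        if i >= n and visual[i] == visual[i - n]:
            tags.append("visual match -> press A")
        print(f"trial {i:2d}: hear {audio[i]}, square {visual[i]}  {'; '.join(tags)}")

run_dual_n_back(n=2)
```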


Jaeggi and Buschkuehl persuaded undergraduates at the University of Bern, and later other subjects, to submit to the dual n-back for several minutes a day, over weeks. They tested their fluid intelligence using Raven's Progressive Matrices, a widely respected test involving visual pattern manipulations. (Think of those old newspaper ads for Mensa, and you won't be far off.) What they discovered upended the conventional wisdom: after 19 days of training, their subjects recorded a 44% average performance boost on the Raven test. By then, the first generation of commercial brain games had been largely discredited: playing Dr Kawashima's Brain Training on your Nintendo, it's now clear, will only make you better at playing Dr Kawashima's Brain Training on your Nintendo. But playing the dual n-back, it appeared, could truly make people more intelligent.

There are few surer ways to create a firestorm among psychologists and neuroscientists, it turns out, than to claim such impressive changes to an aspect of intelligence long considered fixed. Some in the field compared the Jaeggi findings to cold fusion, which is as close as you can get to accusing a fellow academic of hallucinating while remaining minimally polite. Some prominently reported attempts to replicate the Jaeggi findings failed, but others found similar positive results in schoolchildren and the elderly. In 2013, a meta-analysis based on 23 studies found "no convincing evidence of the generalisation of working memory training to other skills", though there's been debate about the selection criteria involved. An earlier British study, conducted with the BBC show Bang Goes The Theory, reached similar conclusions, but didn't focus on the same kind of game. Interviews conducted for Hurley's book show the scientific establishment to be well and truly divided. It's all "a bit of a mess", Adam Hampshire says, due to the proliferation of small-scale studies, which makes false positives far more likely: "If 1,000 people roll a dice 16 times, some of them are going to get just high numbers" – and those are the studies that get published.
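
Hampshire's point about small studies is easy to check numerically. The sketch below is only an illustration of that statistical argument, with arbitrary parameters: it simulates many small "studies" in which the true effect is zero and counts how many nevertheless look significant by chance.

```python
import random

def small_null_studies(n_studies=1000, n_subjects=16, threshold=1.96):
    """Simulate studies where the true effect is zero: each 'study'
    averages n_subjects noisy measurements, and we count how many
    look 'significant' (|z| > threshold) purely by chance."""
    false_positives = 0
    for _ in range(n_studies):
        scores = [random.gauss(0, 1) for _ in range(n_subjects)]
        mean = sum(scores) / n_subjects
        z = mean / (1 / n_subjects ** 0.5)  # standard error of the mean is 1/sqrt(n)
        if abs(z) > threshold:
            false_positives += 1
    return false_positives

# Roughly 5% of these null studies will clear the conventional threshold.
print(small_null_studies(), "of 1000 null studies look 'significant'")
```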

"I know it sounds as if we're just pouring cold water on this, but the thing is, we've been disappointed so many times before," adds James Thompson, a senior honorary lecturer in psychology at University College London, and a prominent sceptic. "About 40 years ago, it was hyperbaric oxygen for pregnant women, so they'd give birth to geniuses. I got transcranial stimulation at Guy's hospital in 1969, as a guinea pig. But then you do the hard research and you don't see much difference."

Yet it would be very strange, ultimately, if it were to prove utterly impossible to modify your brain's basic capacities through any form of training. The brain is a physical organ, and its processes are physical processes; why should the capacities we label "fluid intelligence" be uniquely immune to environmental impacts? Your intelligence is surely heavily influenced by your genes – but so (for example) is your height, and that can be affected by environmental factors, specifically how well you're nourished as a child. "Some people want to assert that it's unchangeable, as if that's hard science," Hurley says. "But it's actually a much more magical way of thinking about the mind to say that the environment can't possibly have any effect."

After four days of 20 minutes a day on the dual n-back, I have no idea if it's working, but it's definitely hurting. Sadly, that's probably a good sign, and it's one thing on which researchers do tend to agree: if intelligence can be boosted by brain games – a very big if – they almost certainly won't be enjoyable ones. Unless the task involved keeps getting harder, so that you never quite feel you've got the hang of it, there's no way you'll get more intelligent. When you master a task, your brain becomes more efficient at performing it. And "efficiency is not your friend when it comes to cognitive improvement", as Andrea Kuszewski, a behavioural therapist trained in neuroscience, and a believer in the promise of intelligence-boosting, puts it. She points to studies of people playing Tetris, which showed an increase in cortical activity and cortical thickness as they struggled to get to grips with the game – but a decrease in both once they'd mastered it.

This is the closest thing you're going to get to a solid, science-backed piece of advice, when it comes to exercising your brain: don't let things get too fun. Once you're pretty good at sudoku, stop doing sudoku; switch to something you're worse at. Keep seeking challenges that make your head hurt. Nobody ever said getting smarter was going to be easy.

There could be ways to become smarter more quickly, though – so long as you're willing, like Anthony Lee, to do slightly nerve-racking things with electricity. ("I'm not afraid to experiment," Lee says: as a child, he was always the one to accept dares. "But I'm a relatively responsible adult, so if I felt there was real danger, I don't think I'd do it." Then again, he adds, with a laugh, "I really don't understand all that much about electronics.") At a research lab in New Mexico a few years ago, according to a report in Nature, volunteers wearing small wet sponges on their temples played Darwars Ambush!, a soldier-training game sponsored by the US Defense Advanced Research Projects Agency (Darpa). Darwars Ambush! involves navigating virtual landscapes reminiscent of urban war zones, learning to spot hidden gunmen or deadly explosive devices. After just a few hours' training, players who'd been receiving a 2-milliamp current through the sponges on their heads showed twice as much improvement on the game as those getting a twentieth of that current.

The idea of electrically stimulating human bodies goes back at least to the 19th century, when it was used to cure "melancholy"; much later, electroconvulsive therapy would be used to induce seizures in psychiatric patients. Since then, studies have demonstrated that a gentler approach, transcranial magnetic stimulation, can alleviate serious depression and perhaps even trigger bursts of "savant" intellectual prowess, reminiscent of the kind depicted in Rain Man. "How long," wondered the New York Times in 2003, "before Americans are walking around with humming antidepression helmets and math-enhancing 'hair-dryers' on their heads?"

The answer: one decade, if you count the Foc.us tDCS headset, now on sale in the US and UK. The Foc.us describes itself as an accessory for gamers, reportedly since it's easier to comply with medical regulations that way. But the implicit promise is the same as for the Darpa initiative, and Lee's home-based tinkering: by temporarily boosting cognitive capacity, tDCS might hugely speed up the learning process. It has also been shown, in one study, to induce "a feeling of anticipated challenge and [a] strong motivation to overcome it", which would presumably aid learning, too.

Precisely why tDCS works remains partly mysterious – though it's not enormously surprising that neurons, which transmit information via electrical signals, might do so faster and better with an electrical boost from outside. Dan Hurley quotes Roy Hoshi Hamilton, director of the Laboratory for Cognition and Neural Stimulation at the University of Pennsylvania: "What is a thought? A thought is what happens when some pattern of firing of neurons has happened in your brain. So if you have a technology that makes it ever so slightly easier for lots and lots of these neurons… to do their thing, then it doesn't seem so far-fetched that such a technology, be it ever so humble, would have an effect on cognition." Repeat the process enough times, and you'd expect the brain's neural pathways to change, too.

All of which is potentially dangerous, if you do it wrong. You might feel inclined to stick to brain games instead, on the rationale that even if they don't work, they can't do any harm. But that position's arguably misguided. Your time is finite, and every hour you spend wrestling with the dual n-back is one you could have spent doing any of the more mundane things that will certainly promote brain health: doing sufficient physical exercise, getting enough sleep, and preparing and eating healthy food. "Live a good clean life, get proper sleep and you'll be at the peak of whatever your potential performance is," James Thompson suggests. "And we use our intelligence to do specific tasks, so don't waste your time remembering numbers backwards – read a good statistics book. Learn about modern genetics. Read a history of intellectual discovery. Whenever people talk about spending 24 hours on the dual n-back, I think, well, yes, but what else could I do with 24 hours?"

I didn't spend 24 hours on the dual n-back, or even 12, but I did spend as long as I ever plan to, pumped up on Brain TonIQ or Brain Candy, both of which seemed to give me mild headaches. (I bought these drinks in the US, and not all are available in the UK: the Neuro range, including Neuro Passion and Neuro Sonic, has been temporarily withdrawn from British sale, because ingredients in some of the range don't have regulators' approval.)

After two weeks, I retook the intelligence test, based on the Raven matrices, and scored four points higher, at 133. Which proves absolutely nothing at all, though it did make me feel briefly smug.

I plan on never doing the dual n-back again, but I might take Andrea Kuszewski's advice and try turning off my smartphone's maps function, forcing myself to navigate the old-fashioned way. "Look, technology like GPS is great," Kuszewski says, "but there are always costs. If you used to walk to work but then you bought a car and you start driving everywhere instead, well, it'd be a lot easier. But everyone knows your body's going to suffer as a result! Why should it be any different with your brain?"

The long-sought secret of boosting intelligence could turn out to be straightforward – wherever possible, do things the harder way. I know, I know: it's not what I wanted to hear, either.


Study finds brain areas involved in seeking information about bad possibilities

Ilya Monosov, PhD, shows data on brain activity obtained from monkeys as they grapple with uncertainty. Monosov and colleagues at Washington University School of Medicine in St. Louis have identified the brain regions involved in choosing whether to find out if a bad event is about to happen. Credit: Washington University Photographic Services

The term "doomscrolling" describes the act of endlessly scrolling through bad news on social media and reading every worrisome tidbit that pops up, a habit that unfortunately seems to have become common during the COVID-19 pandemic.

The biology of our brains may play a role in that. Researchers at Washington University School of Medicine in St. Louis have identified specific areas and cells in the brain that become active when an individual is faced with the choice to learn about, or hide from, information about an unwanted aversive event that the individual likely has no power to prevent.

The findings, published June 11 in Neuron, could shed light on the processes underlying psychiatric conditions such as obsessive-compulsive disorder and anxiety—not to mention how all of us cope with the deluge of information that is a feature of modern life.

"People's brains aren't well equipped to deal with the information age," said senior author Ilya Monosov, Ph.D., an associate professor of neuroscience, of neurosurgery and of biomedical engineering. "People are constantly checking, checking, checking for news, and some of that checking is totally unhelpful. Our modern lifestyles could be resculpting the circuits in our brain that have evolved over millions of years to help us survive in an uncertain and ever-changing world."

In 2019, studying monkeys, Monosov laboratory members J. Kael White, Ph.D., then a graduate student, and senior scientist Ethan S. Bromberg-Martin, Ph.D., identified two brain areas involved in tracking uncertainty about positively anticipated events, such as rewards. Activity in those areas drove the monkeys' motivation to find information about good things that may happen.

But it wasn't clear whether the same circuits were involved in seeking information about negatively anticipated events, like punishments. After all, most people want to know whether, for example, a bet on a horse race is likely to pay off big. Not so for bad news.

"In the clinic, when you give some patients the opportunity to get a genetic test to find out if they have, for example, Huntington's disease, some people will go ahead and get the test as soon as they can, while other people will refuse to be tested until symptoms occur," Monosov said. "Clinicians see information-seeking behavior in some people and dread behavior in others."

To find the neural circuits involved in deciding whether to seek information about unwelcome possibilities, first author Ahmad Jezzini, Ph.D., and Monosov taught two monkeys to recognize when something unpleasant might be headed their way. They trained the monkeys to recognize symbols that indicated they might be about to get an irritating puff of air to the face. For example, the monkeys first were shown one symbol that told them a puff might be coming but with varying degrees of certainty. A few seconds after the first symbol was shown, a second symbol was shown that resolved the animals' uncertainty. It told the monkeys that the puff was definitely coming, or it wasn't.

The researchers measured whether the animals wanted to know what was going to happen by observing whether they watched for the second signal or averted their eyes, and, in separate experiments, by letting the monkeys choose among different symbols and their outcomes.

Much like people, the two monkeys had different attitudes toward bad news: one wanted to know; the other preferred not to. The difference in their attitudes toward bad news was striking because they were of like mind when it came to good news. When they were given the option of finding out whether they were about to receive something they liked—a drop of juice—they both consistently chose to find out.

"We found that attitudes toward seeking information about negative events can go both ways, even between animals that have the same attitude about positive rewarding events," said Jezzini, who is an instructor in neuroscience. "To us, that was a sign that the two attitudes may be guided by different neural processes."

By precisely measuring neural activity in the brain while the monkeys were faced with these choices, the researchers identified one brain area, the anterior cingulate cortex, that encodes information about attitudes toward good and bad possibilities separately. They found a second brain area, the ventrolateral prefrontal cortex, that contains individual cells whose activity reflects the monkeys' overall attitudes: yes for info on either good or bad possibilities vs. yes for intel on good possibilities only.

Understanding the neural circuits underlying uncertainty is a step toward better therapies for people with conditions such as anxiety and obsessive-compulsive disorder, which involve an inability to tolerate uncertainty.

"We started this study because we wanted to know how the brain encodes our desire to know what our future has in store for us," Monosov said. "We're living in a world our brains didn't evolve for. The constant availability of information is a new challenge for us to deal with. I think understanding the mechanisms of information seeking is quite important for society and for mental health at a population level."


Computation Power: Human Brain vs Supercomputer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS).

Since 2017, there have been supercomputers that can perform nearly a hundred quadrillion FLOPS. As of November 2017, all of the world's 500 fastest supercomputers run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and more technologically advanced exascale supercomputers.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

At the time of writing, the world's fastest supercomputer is Summit (also known as OLCF-4), developed by IBM for use at Oak Ridge National Laboratory and capable of 200 petaflops.

Each one of its 4,608 nodes (9,216 IBM POWER9 CPUs and 27,648 NVIDIA Tesla GPUs) has over 600 GB of coherent memory (6×16 = 96 GB HBM2 plus 2×8×32 = 512 GB DDR4 SDRAM) which is addressable by all CPUs and GPUs, plus 800 GB of non-volatile RAM that can be used as a burst buffer or as extended memory. The POWER9 CPUs and Volta GPUs are connected using NVIDIA's high-speed NVLink.

This allows for a heterogeneous computing model. To provide a high rate of data throughput, the nodes are connected in a non-blocking fat-tree topology using a dual-rail Mellanox EDR InfiniBand interconnect for both storage and inter-process communications traffic, which delivers 200 Gb/s of bandwidth between nodes as well as in-network computing acceleration for communications frameworks such as MPI and SHMEM/PGAS.
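
As a sanity check on those figures, the per-node arithmetic works out as follows (a small sketch assuming only the configuration described above):

```python
# Per-node memory and system totals for Summit, as described above.
gpus_per_node, hbm2_per_gpu_gb = 6, 16          # 6 x 16 GB HBM2
cpus_per_node, channels, dimm_gb = 2, 8, 32     # 2 x 8 x 32 GB DDR4

coherent_gb = gpus_per_node * hbm2_per_gpu_gb + cpus_per_node * channels * dimm_gb
print(f"coherent memory per node: {coherent_gb} GB")  # 96 + 512 = 608 GB, i.e. "over 600 GB"
print("plus 800 GB of non-volatile RAM per node as a burst buffer")

nodes = 4608
print(f"CPUs: {nodes * cpus_per_node}, GPUs: {nodes * gpus_per_node}")
# 9,216 CPUs and 27,648 GPUs, matching the totals quoted above
```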

Brains Are Very Different From Computers

Our miraculous brains operate on a higher order still. Although it is impossible to calculate precisely, it has been postulated that the human brain operates at around 1 exaFLOP, equivalent to a billion billion calculations per second.

When we discuss computers, we are referring to meticulously designed machines that are based on logic, reproducibility, predictability, and math. The human brain, on the other hand, is a tangled, seemingly random mess of neurons that do not behave in a predictable manner.

The brain is both hardware and software, whereas in computers the two are inherently separate. The same interconnected areas, linked by billions of neurons and perhaps trillions of glial cells, can perceive, interpret, store, analyze, and redistribute at the same time. Computers, by their very definition and fundamental design, have some parts for processing and others for memory; the brain doesn’t make that separation, which makes it hugely efficient.

The same calculations and processes that might take a computer a few million steps can be achieved by a few hundred neuron transmissions, requiring far less energy and performing at far greater efficiency. The amount of energy required to power computations by the world’s fastest supercomputer would be enough to power a building; the human brain achieves comparable processing speeds on roughly the energy needed to power a dim light bulb.
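
To put rough numbers on that comparison, the sketch below uses the 1 exaFLOP and 20 W estimates quoted above for the brain, and Summit's 200 petaflops together with a power draw of roughly 13 MW (the 13 MW figure is a commonly cited estimate and an assumption here, not taken from this article):

```python
# Rough energy-efficiency comparison: Summit vs the human brain.
summit_flops = 200e15        # 200 petaflops (peak), as quoted above
summit_watts = 13e6          # ~13 MW: commonly cited estimate, an assumption here

brain_flops = 1e18           # ~1 exaFLOP, the estimate quoted above
brain_watts = 20             # ~20 W

summit_eff = summit_flops / summit_watts
brain_eff = brain_flops / brain_watts

print(f"Summit: {summit_eff:.2e} FLOPS per watt")
print(f"Brain:  {brain_eff:.2e} FLOPS per watt")
print(f"Brain is roughly {brain_eff / summit_eff:,.0f}x more efficient by this crude measure")
```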

One of the things that truly sets brains apart, aside from their clear advantage in raw computing power, is the flexibility they display. Essentially, the human brain can rewire itself, a feat more formally known as neuroplasticity. Neurons are able to disconnect and reconnect with others, and even change their basic features, something that a carefully constructed computer cannot do.



Pumped for action

But if that's true, how do we explain why Karpov grew too thin to keep competing in his chess match? The general consensus is that it mostly comes down to stress and reduced food consumption, not mental exhaustion.

Elite chess players are under intense pressure that causes stress, which can lead to an elevated heart rate, faster breathing and sweating. Combined, these effects burn calories over time. In addition, elite players must sometimes sit for as much as 8 hours at a time, which can disrupt their regular eating patterns. Energy loss is also something that stage performers and musicians might experience, since they're often under high stress and have disrupted eating schedules.

"Keeping your body pumped up for action for long periods of time is very energy demanding,&rdquo Messier explained. &ldquoIf you can&rsquot eat as often or as much as you can or would normally — then you might lose weight.&rdquo

So, the verdict is in: Sadly, thinking alone won't make us slim. But when you next find yourself starved of inspiration, one extra square of chocolate probably won't hurt.


The Causes of Information Overload

Brain overload stems from a variety of factors, each of which arises from taking in new information. The mind has a limited capacity for attending to information at any given time and is drawn toward novelty in its environment. The combination of limited attention and novelty-seeking is problematic in our modern context, where rapid exposure to information is ubiquitous thanks to easy access to electronic devices and social media.

Despite the brain’s problematic disposition, brain overload isn’t guaranteed to happen because of an excess of information. According to a Pew Research Center survey titled “Information Overload,” 79% of respondents found that access to many kinds of information gave them a sense of control over their lives. The survey found that certain circumstances — and even certain institutions — can be what trigger the effects of overload. Fifty-six percent of respondents reported higher levels of stress caused by governmental agencies, schools, and banks because of the information gathering processes associated with them.

This data set makes sense considering Levitin’s definitional work. While it seems natural that most Americans would want access to updated and continuous information through their devices — smartphones, personal computers, and tablets — it’s also unsurprising that most respondents associated stress with the different kinds of information they receive. What’s more, nearly a majority of these respondents reported trouble keeping up with the information they had access to. Since these conditions will only intensify as technological innovation continues, we will need to find solutions to the problem.


Forgetting uses more brain power than remembering

Summary: Intentional forgetting may require more attention to the unwanted information, rather than less.

Source: University of Texas at Austin

Choosing to forget something might take more mental effort than trying to remember it, researchers at The University of Texas at Austin discovered through neuroimaging.

These findings, published in the Journal of Neuroscience, suggest that in order to forget an unwanted experience, more attention should be focused on it. This surprising result extends prior research on intentional forgetting, which focused on reducing attention to the unwanted information through redirecting attention away from unwanted experiences or suppressing the memory’s retrieval.

“We may want to discard memories that trigger maladaptive responses, such as traumatic memories, so that we can respond to new experiences in more adaptive ways,” said Jarrod Lewis-Peacock, the study’s senior author and an assistant professor of psychology at UT Austin. “Decades of research has shown that we have the ability to voluntarily forget something, but how our brains do that is still being questioned. Once we can figure out how memories are weakened and devise ways to control this, we can design treatment to help people rid themselves of unwanted memories.”

Memories are not static. They are dynamic constructions of the brain that regularly get updated, modified and reorganized through experience. The brain is constantly remembering and forgetting information — and much of this happens automatically during sleep.

When it comes to intentional forgetting, prior studies focused on locating “hotspots” of activity in the brain’s control structures, such as the prefrontal cortex, and long-term memory structures, such as the hippocampus. The latest study focuses, instead, on the sensory and perceptual areas of the brain, specifically the ventral temporal cortex, and the patterns of activity there that correspond to memory representations of complex visual stimuli.

“We’re looking not at the source of attention in the brain, but the sight of it,” said Lewis-Peacock, who is also affiliated with the UT Austin Department of Neuroscience and the Dell Medical School.

Using neuroimaging to track patterns of brain activity, the researchers showed a group of healthy adults images of scenes and faces, instructing them to either remember or forget each image.

Their findings not only confirmed that humans have the ability to control what they forget, but that successful intentional forgetting required “moderate levels” of brain activity in these sensory and perceptual areas — more activity than what was required to remember.

“A moderate level of brain activity is critical to this forgetting mechanism. Too strong, and it will strengthen the memory; too weak, and you won’t modify it,” said Tracy Wang, lead author of the study and a psychology postdoctoral fellow at UT Austin. “Importantly, it’s the intention to forget that increases the activation of the memory, and when this activation hits the ‘moderate level’ sweet spot, that’s when it leads to later forgetting of that experience.”

The researchers also found that participants were more likely to forget scenes than faces, which can carry much more emotional information.

“We’re learning how these mechanisms in our brain respond to different types of information, and it will take a lot of further research and replication of this work before we understand how to harness our ability to forget,” said Lewis-Peacock, who has begun a new study using neurofeedback to track how much attention is given to certain types of memories.

“This will make way for future studies on how we process, and hopefully get rid of, those really strong, sticky emotional memories, which can have a powerful impact on our health and well-being,” Lewis-Peacock said.




What percentage of our brain do we use?

The brain is the most complex organ in the human body. Many believe that a person only ever uses 10 percent of their brain. Is there any truth to this?

A person’s brain determines how they experience the world around them. The brain weighs about 3 pounds and contains around 100 billion neurons — cells that carry information.

In this article, we explore how much of the brain a person uses. We also bust some widely held myths and reveal some interesting facts about the brain.

Studies have debunked the myth that humans use only 10 percent of their brain.

According to a survey from 2013, around 65 percent of Americans believe that we only use 10 percent of our brain.

But this is just a myth, according to an interview with neurologist Barry Gordon in Scientific American. He explained that the majority of the brain is almost always active.

The 10 percent myth was also debunked in a study published in Frontiers in Human Neuroscience.

One common brain imaging technique, called functional magnetic resonance imaging (fMRI), can measure activity in the brain while a person is performing different tasks.

Using this and similar methods, researchers show that most of our brain is in use most of the time, even when a person is performing a very simple action.

A lot of the brain is even active when a person is resting or sleeping.

The percentage of the brain in use at any given time varies from person to person. It also depends on what a person is doing or thinking about.

It’s not clear how this myth began, but there are several possible sources.

In an article published in a 1907 edition of the journal Science, psychologist and author William James argued that humans only use part of their mental resources. However, he did not specify a percentage.

The figure was referenced in Dale Carnegie’s 1936 book How to Win Friends and Influence People. The myth was described as something the author’s college professor used to say.

There is also a belief among scientists that neurons make up around 10 percent of the brain’s cells. This may have contributed to the 10 percent myth.

The myth has been repeated in articles, TV programs, and films, which helps to explain why it is so widely believed.

Like any other organ, the brain is affected by a person’s lifestyle, diet, and the amount that they exercise.

To improve the health and function of the brain, a person can do the following things.

Eat a balanced diet

Eating well improves overall health and well-being. It also reduces the risk of developing health issues that may lead to dementia.

The following foods promote brain health:

  • Fruits and vegetables with dark skins. Some are rich in vitamin E, such as spinach, broccoli, and blueberries. Others are rich in beta carotene, including red peppers and sweet potatoes. Vitamin E and beta carotene promote brain health.
  • Oily fish. These types of fish, such as salmon, mackerel, and tuna, are rich in omega-3 fatty acids, which may support cognitive function.
  • Walnuts and pecans. They are rich in antioxidants, which promote brain health.


Exercise regularly

Regular exercise also reduces the risk of health problems that may lead to dementia.

Cardiovascular activities, such as walking briskly for 30 minutes a day, can be enough to reduce the risk of brain function declining.

There are other accessible and inexpensive options as well.

Keep the brain active

The more a person uses their brain, the better their mental functions become. For this reason, brain training exercises are a good way to maintain overall brain health.

A recent study conducted over 10 years found that people who used brain training exercises reduced the risk of dementia by 29 percent.

The most effective training focused on increasing the brain’s speed and ability to process complex information quickly.

There are a number of other popular myths about the brain. These are discussed and dispelled below.

Left-brained vs. right-brained

Many believe that a person is either left-brained or right-brained, with right-brained people being more creative, and left-brained people more logical.

However, research suggests that this is a myth — people are not dominated by one brain hemisphere or the other. A healthy person is constantly using both hemispheres.

It is true that the hemispheres have different tasks. For instance, a study in PLOS Biology discussed the extent to which the left hemisphere is involved in processing language, and the right in processing emotions.

Alcohol and the brain

Long-term alcoholism can lead to a number of health problems, including brain damage.

It is not, however, as simple as saying that drinking alcohol kills brain cells — this is a myth. The reasons for this are complicated.

If a woman drinks too much alcohol while pregnant, it can affect the brain development of the fetus, and even cause fetal alcohol syndrome.

The brains of babies with this condition may be smaller and often contain fewer brain cells. This may lead to difficulties with learning and behavior.

Subliminal messages

Research suggests that subliminal messages can provoke an emotional response in people who are unaware that they have received an emotional stimulus. But can subliminal messages help a person to learn new things?

A study published in Nature Communications found that hearing recordings of vocabulary when sleeping could improve a person’s ability to remember the words. This was only the case in people who had already studied the vocabulary.

Researchers noted that hearing information while asleep cannot help a person to learn new things. It may only improve recall of information learned earlier, while awake.

Brain wrinkles

The human brain is covered in folds, commonly known as wrinkles. The dip in each fold is called the sulcus, and the raised part is called the gyrus.

Some people believe that a new wrinkle is formed every time a person learns something. This is not the case.

The brain starts to develop wrinkles before a person is born, and this process continues throughout childhood.

The brain is constantly making new connections and breaking old ones, even in adulthood.


The human brain’s remarkably low power consumption, and how computers might mimic its efficiency

A new paper from researchers working in the UK and Germany dives into how much power the human brain consumes when performing various tasks — and sheds light on how humans might one day build similar computer-based artificial intelligences. Mapping biological systems isn’t as sexy as the giant discoveries that propel new products or capabilities, but that’s because it’s the final discovery — not the decades of painstaking work that lays the groundwork — that tends to receive all the media attention.

This paper — Power Consumption During Neuronal Computation — will run in an upcoming issue of IEEE’s magazine, “Engineering Intelligent Electronic Systems Based on Computational Neuroscience.” Here at ET, we’ve discussed the brain’s computational efficiency on more than one occasion. Put succinctly, the brain is more power efficient than our best supercomputers by orders of magnitude — and understanding its structure and function is absolutely vital.

Is the brain digital or analog? Both

When we think about compute clusters in the modern era, we think about vast arrays of homogeneous or nearly-homogeneous systems. Sure, a supercomputer might combine two different types of processors — Intel Xeon + Nvidia Tesla, for example, or Intel Xeon + Xeon Phi — but as different as CPUs and GPUs are, they’re both still digital processors. The brain, it turns out, incorporates both digital and analog signaling into itself and the two methods are used in different ways. One potential reason why is that the power efficiency of the two methods varies dramatically depending on how much bandwidth you need and how far the signal needs to travel.

The efficiency of the two systems depends on the signal-to-noise ratio (SNR) you need to maintain within the system.
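
One way to see why the trade-off hinges on SNR is the Shannon–Hartley capacity formula, C = B·log2(1 + S/N): pushing more bits per second through a fixed-bandwidth channel demands exponentially more signal power relative to noise. The sketch below is a toy illustration of that scaling only, not a model of actual neural signalling:

```python
import math

def power_needed(bits_per_second, bandwidth_hz, noise_power=1.0):
    """Shannon-Hartley: C = B * log2(1 + S/N). Invert it to get the
    signal power needed to carry a given rate over a given bandwidth."""
    snr = 2 ** (bits_per_second / bandwidth_hz) - 1
    return snr * noise_power

bandwidth = 1_000.0  # Hz, arbitrary toy value
for rate in (500, 1_000, 2_000, 5_000, 10_000):
    print(f"{rate:6d} bit/s over {bandwidth:.0f} Hz -> "
          f"relative signal power {power_needed(rate, bandwidth):.1f}")
# Doubling the data rate over the same bandwidth costs far more than
# double the signal power, which is why high-rate links favor different
# signalling strategies than low-rate ones.
```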

One of the other differences between existing supercomputers and the brain is that neurons aren’t all the same size and they don’t all perform the same function. If you’ve done high school biology you may remember that neurons are broadly classified as motor neurons, sensory neurons, or interneurons. This type of grouping ignores the subtle differences between the various structures — the actual number of different types of neurons in the brain is estimated at between several hundred and perhaps as many as 10,000, depending on how you classify them.

Compare that to a modern supercomputer that uses two or three (at the very most) CPU architectures to perform calculations and you’ll start to see the difference between our own efforts to reach exascale-level computing and simulate the brain, and the actual biological structure. If our models approximated the biological functions, you’d have clusters of ARM Cortex M0 processors tied to banks of 15-core Xeons which pushed data to Tesla GPUs, which were also tied to some Intel Quark processors with another trunk shifting work to a group of IBM Power8 cores — all working in perfect harmony. Just as modern CPUs have vastly different energy efficiencies, die sizes, and power consumption levels, we see exactly the same trends in neurons.

All three charts are interesting, but it’s the chart on the far right that intrigues me most. Relative efficiency is graphed along the vertical axis while the horizontal axis shows bits per second. Looking at it, you’ll notice that the neurons that are most efficient in terms of bits transferred per ATP molecule (ATP is the cell’s basic unit of energy, so bits per ATP plays the role that bits per watt plays in computing) are also among the slowest in terms of bits per second. The neurons that can transfer the most data in terms of bits per second are also the least efficient.

Again, we see clear similarities between the design of modern microprocessors and the characteristics of biological organisms. That’s not to downplay the size of the gap or the dramatic improvements we’d have to make in order to offer similar levels of performance, but there’s no mystic sauce here — and analyzing the biological systems should give us better data on how to tweak semiconductor designs to approximate it.

A neuromorphic chip. Most attempts at emulating the human brain have so far revolved around recreating neurons and synapses with crossbar switches.

Much of what we cover on ExtremeTech is cast in terms of the here-and-now. A better model of neuron energy consumption doesn’t really speak to any short-term goals — this won’t lead directly to a better microprocessor or a faster graphics card. It doesn’t solve the enormous problems we face in trying to shift conventional computing over to a model that more closely mimics the brain’s own function (neuromorphic design). But it does move us a critical step closer to the long-term goal of fully understanding (and possibly simulating) the brain. After all, you can’t simulate the function of an organ if you don’t understand how it signals or under which conditions it functions.

Emulating a brain has at least one thing in common with emulating an instruction set in computing — the greater the gap between the two technologies, typically the larger the power cost to emulate it. The better we can analyze the brain, the better our chances of emulating one without needing industrial power stations to keep the lights on and the cooling running.