
Elon Musk’s Neuralink KNEW its brain implant was likely to malfunction in its first human patient – but went ahead with the surgery anyway, shocking report claims


Elon Musk‘s Neuralink knew its brain implant was likely to malfunction in its first human patient, but went ahead with the surgery anyway, a new report claims. 

In January, the firm implanted a brain chip in its first patient, Noland Arbaugh, who is paralyzed from the shoulders down due to a 2016 diving accident. 

But in the weeks following the surgery, the implant malfunctioned when ‘a number of threads retracted from the brain,’ Neuralink said in a blog update last week.

Now, a report by Reuters citing ‘five people familiar with the matter’ claims that this issue had been ‘known about for years’ from animal testing. 

Despite this, the firm deemed the risk low enough that a redesign was not merited, the sources added.

In January, Neuralink implanted a brain chip in its first patient, Noland Arbaugh, who is paralyzed from the shoulders down due to a 2016 diving accident


Elon Musk’s Neuralink knew its brain implant was likely to malfunction in its first human patient, but went ahead with the surgery anyway, a new report claims

Neuralink is testing its implant to give paralyzed patients the ability to use digital devices by thinking alone – a prospect that could help people with spinal cord injuries.

The company said last week that the implant’s tiny wires, which are thinner than a human hair, retracted from a patient’s brain in its first human trial, resulting in fewer electrodes that could measure brain signals.

The signals get translated into actions, such as moving a mouse cursor on a computer screen.

The company said it managed to restore the implant’s ability to monitor its patient’s brain signals by making changes that included modifying its algorithm to be more sensitive.
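As a rough, purely illustrative sketch of that idea – and not Neuralink’s actual software – the Python snippet below shows a toy linear decoder: per-electrode firing rates are multiplied by learned weights to estimate a cursor velocity, losing channels weakens the estimate, and raising a gain parameter is one crude way to make a decoder more ‘sensitive’. The channel count, weights and gain are hypothetical.

# Illustrative toy example only: a linear decoder that maps per-electrode
# firing rates to a 2D cursor velocity. All numbers are hypothetical and
# bear no relation to Neuralink's real algorithm.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 64                        # hypothetical electrode count
W = rng.normal(size=(2, N_CHANNELS))   # decoder weights 'learned' in calibration

def decode_velocity(rates, active_mask, gain=1.0):
    """Turn a vector of firing rates into an (x, y) cursor velocity.

    active_mask marks which channels are still usable; if threads retract,
    fewer channels contribute and the decoded signal weakens, which a
    higher gain (a more 'sensitive' decoder) can partly offset.
    """
    usable = rates * active_mask
    return gain * (W @ usable) / max(active_mask.sum(), 1)

rates = rng.poisson(5, size=N_CHANNELS).astype(float)

all_on = np.ones(N_CHANNELS)
some_off = all_on.copy()
some_off[:20] = 0                      # pretend 20 channels stopped reporting

print(decode_velocity(rates, all_on))
print(decode_velocity(rates, some_off))            # weaker estimate
print(decode_velocity(rates, some_off, gain=1.5))  # rescaled to compensate

Real brain-computer interface decoders are far more sophisticated, but the underlying trade-off is the same: the fewer working electrodes there are, the less signal there is to decode.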

The sources declined to be identified, citing confidentiality agreements they had signed with the company. 

Neuralink and its executives did not respond to calls and emails seeking comment.


Neuralink is testing its implant to give paralyzed patients the ability to use digital devices by thinking alone – a prospect that could help people with spinal cord injuries


The company said last week that the implant’s tiny wires, which are thinner than a human hair, retracted from a patient’s brain in its first human trial, resulting in fewer electrodes that could measure brain signals. The signals get translated into actions, such as moving a mouse cursor on a computer screen

The US Food and Drug Administration was aware of the potential issue with the wires because the company shared the animal testing results as part of its application to begin human trials, one of the people said.

The FDA declined to comment on whether it was aware of the issue or its possible significance. 

The agency told Reuters it would continue to monitor the safety of patients enrolled in Neuralink’s study.

Were Neuralink to continue the trials without a redesign, it could face challenges should more wires pull out and its tweak to the algorithm prove insufficient, one of the sources said.

But redesigning the threads comes with its own risks. 

Anchoring them in the brain, for example, could result in brain tissue damage if the threads dislodge or if the company needs to remove the device, two of the sources said.

The company has sought to design the threads in a way that makes their removal seamless, so that the implant can be updated over time as the technology improves, current and former employees say.

Neuralink’s post last week made no mention of adverse health effects to Arbaugh and did not disclose how many of the device’s 64 threads pulled out or stopped collecting brain data.

So far, the device has allowed Arbaugh to play video games, browse the internet and move a computer cursor on his laptop by thinking alone, according to company blog posts and videos. 

Neuralink says that soon after the surgery, Arbaugh surpassed the world record for the speed at which a person can control a cursor with thoughts alone.

The company has sought to design the threads in a way that makes their removal seamless, so that the implant can be updated over time as the technology improves, current and former employees say


It is common for medical device companies to troubleshoot different designs during animal trials and for issues to arise during animal and clinical testing, according to outside researchers and sources who have worked at Neuralink and other medical device companies.

Specialists who have studied brain implants say the issue of threads moving can be hard to solve, partly due to the mechanics of how the brain moves inside the skull.

Robert Gaunt, a neural engineer at the University of Pittsburgh, described the movement of the wires so soon after the surgery as disappointing, but said it was not unforeseen.

‘In the immediate days, weeks, months after an implant like this, it’s probably the most vulnerable time,’ he said.

In 2022, the FDA initially rejected Neuralink’s application to begin human trials, and raised safety concerns about the threads, Reuters exclusively reported last year.

Neuralink conducted additional animal testing to address those concerns, and the FDA last year granted the company approval to begin human testing.


Neuralink’s post last week made no mention of adverse health effects to Arbaugh and did not disclose how many of the device’s 64 threads pulled out or stopped collecting brain data

The company found that a subset of pigs implanted with its device developed a type of inflammation in the brain called granulomas, raising concerns among Neuralink’s researchers that the threads could be the cause, according to three sources familiar with the matter and records seen by Reuters.

Granulomas are an inflammatory tissue response that can form around a foreign object or an infection.

In at least one case, a pig developed a severe case of the condition. 

Company records reviewed by Reuters show that the pig developed a fever and was heaving after surgery. 

Neuralink’s researchers did not recognize the extent of the problem until examining the pig’s brain post-mortem.

Inside Neuralink, researchers debated how to rectify the issue and commenced a months-long investigation, said the sources familiar with the events.

Ultimately, the company could not determine the cause of the granulomas, but concluded that the device and the attached threads were not to blame, one of the sources said.

Elon Musk’s hatred of AI explained: Billionaire believes it will spell the end of humans – a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars – but he draws the line at artificial intelligence.

The billionaire first shared his distaste for AI in 2014, calling it humanity’s ‘biggest existential threat’ and comparing it to ‘summoning the demon.’

At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. 

His main fear is that, in the wrong hands, sufficiently advanced AI could overtake humans – a turning point known as The Singularity – and spell the end of mankind.

That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: ‘The development of full artificial intelligence could spell the end of the human race.

‘It would take off on its own and redesign itself at an ever-increasing rate.’ 

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.

During a 2016 interview, Musk said he helped create OpenAI to ‘have democratisation of AI technology to make it widely available.’

Musk founded OpenAI with Sam Altman, the company’s CEO, but in 2018 the billionaire attempted to take control of the start-up.

His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide.

The chatbot uses ‘large language model’ software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. 
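In schematic terms, ‘learning to generate text in response to a prompt’ comes down to repeatedly predicting the next token. The toy Python sketch below illustrates that loop with a character-level bigram table built from a single sentence – a deliberately tiny stand-in, nothing like the scale or architecture of ChatGPT.

# Toy illustration of the 'predict the next token' loop behind chatbots.
# This is a character-level bigram table built from one short string,
# not a real large language model.
import random
from collections import defaultdict

corpus = "the development of full artificial intelligence could spell the end "

# 'Training': record which character tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(prompt, length=40, seed=0):
    random.seed(seed)
    out = list(prompt)
    for _ in range(length):
        choices = follows.get(out[-1]) or list(corpus)
        out.append(random.choice(choices))   # sample the next character
    return "".join(out)

print(generate("the "))

A real large language model performs the same next-step prediction with a neural network trained on vast amounts of text, rather than a simple lookup table.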

ChatGPT is used to write research papers, books, news articles, emails and more.

But while Altman is basking in its glory, Musk is attacking ChatGPT.

He says the AI is ‘woke’ and deviates from OpenAI’s original non-profit mission.

‘OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,’ Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction – but what does it actually mean?

In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.

Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. 

There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.

For example, humans could scan their consciousness and store it in a computer, in which they would live forever.

The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves – but if this is true, it is far off in the distant future.

Researchers are now looking for signs of AI reaching The Singularity, such as the technology’s ability to translate speech with the accuracy of a human and to perform tasks faster.

Former Google engineer Ray Kurzweil predicts it will be reached by 2045.

He has made 147 predictions about technology advancements since the early 1990s – and 86 per cent have been correct. 
