Artificial Intelligence and Genuine Stupidity

Companies / Technology Jul 17, 2014 - 10:10 PM GMT

By: John_Mauldin


By Patrick Cox

Editor, TransTech Digest and Transformational Technology Alert

In the in-depth article below, I discuss the recent headlines about a supercomputer supposedly passing Alan Turing’s “Turing Test.” I also trace out complex questions regarding how little we still know about the human brain.


You may have assumed, if you stumbled across the spate of headlines about the latest Turing Test, that computer scientists have reached an important milestone in the development of artificial intelligence. This assumption may be true, but not for the reasons being discussed in the press.

As you may know, a Russian supercomputer convinced judges at the Royal Society in London on June 9 that they were chatting via text with a 13-year-old boy. Technically, this qualified as passing the “Turing Test,” conceptualized by Alan Turing in his 1950 paper Computing Machinery and Intelligence.

Turing’s name alone brings weight to claims that a computer has successfully achieved the ability to think by impersonating a human. Alan Turing, along with Hungarian American scientist John von Neumann, formulated the central concepts that led to modern computers. Winston Churchill credited Turing as single-handedly contributing more to the Allied victory in World War II than any other person. The British mathematician and cryptanalyst designed early computers capable of decrypting Nazi messages that enabled numerous major Allied victories.

In that 1950 paper, Turing proposed a game that could help determine whether computers could think. If a computer communicating via written messages could convince more than 30% of judges that it was a human, it might indicate “thinking.” This is the accomplishment that was heralded in at least a dozen headlines like this one over the past couple of weeks. Initial coverage seemed to imply that the event signaled the imminent arrival of science fiction-style artificial intelligence (AI): silicon people. Many referenced the specter of some sort of Skynet, the evil machine intelligence of the James Cameron Terminator movie franchise, dedicated to eliminating and superseding humanity.

After a few days, however, more skepticism emerged about Eugene Goostman, the computer program that helped trick a third of the judges in the Turing Test. I say the program helped because the Russian researchers who designed the program gamed the system by telling the judges ahead of time that they would be talking to a 13-year-old Ukrainian boy who could barely speak English. This fiction lowered judges’ expectations about the language skills and cultural knowledge of the program, providing perfect cover for non sequiturs and misunderstandings that would have otherwise tipped off judges to the nature of the chat. And it worked.

Initially, the developers of the program put Eugene Goostman online here so anybody could test the program’s ability to pass as human. At the time of this writing, the program was not functioning. Amusingly, the website displayed the words, “I’LL BE BACK,” referencing the famous Arnold Schwarzenegger catchphrase from the first Terminator movie about the rogue AI Skynet.

Over the space of a few days, however, media coverage of the test shifted to skepticism about Eugene Goostman, though not about computer scientists’ ability to create AIs with true human abilities. Implicit in much of the discussion about AIs is the assumption that they will eventually attain sentience, though not just yet. This attitude isn’t surprising, of course, because a lot of people are predicting that conscious, not just intelligent, computers are just around the corner.

These predictions are based primarily on Moore’s Law continuing to pack more and more transistors into computers. The theory is that when computers have as many transistors as the brain has biological switches, they will be able to learn and achieve volition and self-awareness.
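
To see what the transistor-count argument amounts to, here is a rough back-of-the-envelope projection in Python. It is purely illustrative: the starting transistor count, the brain’s switch count, and the doubling cadence are all assumptions I have supplied for the sketch, not figures from this article.

# Illustrative only: when would a chip's transistor count match a rough
# estimate of the brain's switch count, assuming Moore's-Law-style doubling?
TRANSISTORS_2014 = 4e9    # assumed order of magnitude for a high-end 2014 chip
BRAIN_SWITCHES = 1e14     # assumed ballpark for the brain's synapse count
DOUBLING_YEARS = 2        # assumed classic Moore's Law cadence

count, year = TRANSISTORS_2014, 2014
while count < BRAIN_SWITCHES:
    count *= 2
    year += DOUBLING_YEARS

print(f"Switch-count parity around {year} ({count:.1e} transistors)")

Even if that arithmetic plays out, parity in switch counts says nothing about whether the machine can learn, want, or be aware of anything, which is exactly the point that follows.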

Personally, I don’t buy it. This notion of a parallel between transistors and neurons is, in my opinion, the result of computer scientists completely misunderstanding and underestimating the human mind. To be fair, I’m not accusing Alan Turing of making that mistake.

In fact, his paper is essentially speculation about how thinking machines might be achieved, and he left much to future computer scientists. He also spent much of the paper dealing with what he believed would be inevitable objections to the concept of thinking machines. The following snippet is worth reading:

It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Clearly, he was wrong. There is no “general educated opinion” about machines thinking. While machines can calculate extraordinarily well, nobody I know in computer science believes that computers do anything but follow complex instructions quickly.

I think there’s a lot about the current state of computers that would probably disappoint Turing. You can glean a lot from his thoughts about how to develop a computer capable of passing the test that now bears his name. What he proposed was not that programmers create a computer with an adult mind capable of conversation. Rather, he envisioned computers capable of learning that could be given general education equivalent to that which humans receive. He recognized that technology might not be able to provide the senses that people use to receive education but pointed at Helen Keller as an example of learning without sound or sight.

He wrote,

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.

Here we see just how wrong Turing was, despite his formidable genius. The reason, however, is not that he was wrong about computer technology. In fact, his vision of future developments in storage and hardware was amazingly accurate, though he was writing in 1950. Rather, I think his error was misunderstanding the human mind, perhaps due to the primitive state of biological knowledge at the time.

His thesis that the child’s brain is so undeveloped “that something like it can be easily programmed” is so wrong, it’s humorous. Numerous computer scientists have tried to create computers that could, like children, absorb educational input and develop unified worldviews that could be communicated via language. We haven’t come close.

In fact, humans come from the womb with astonishingly developed “mechanism.” On the surface, we see instincts like the fear of falling and the need for parental nurturing. Beneath the surface, there is far more, including the bases of grammar shared by all languages. If children are not taught languages, they will develop their own idioglossias, often with complete and sophisticated grammars. Turing, however, grossly underestimated the abilities inherent in human DNA and proposed that the software of early consciousness “can be easily programmed.” Given the supposed ease of that task, he assumed all that was left was to develop more sophisticated hardware.

As I have explained, the problem is mainly one of programming. Advances in engineering will have to be made, too, but it seems unlikely that these will not be adequate for the requirements. Estimates of the storage capacity of the brain vary from 10¹⁰ to 10¹⁵ binary digits. I incline to the lower values and believe that only a very small fraction is used for the higher types of thinking. Most of it is probably used for the retention of visual impressions. I should be surprised if more than 10⁹ was required for satisfactory playing of the imitation game, at any rate against a blind man. (Note: The capacity of the Encyclopaedia Britannica, 11th edition, is 2 × 10⁹.) A storage capacity of 10⁷ would be a very practicable possibility, even by present techniques. It is probably not necessary to increase the speed of operations of the machines at all. Parts of modern machines, which can be regarded as analogs of nerve cells, work about a thousand times faster than the latter. This should provide a “margin of safety” which could cover losses of speed arising in many ways. Our problem then is to find out how to program these machines to play the (Turing Test) game.
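
As a quick sanity check on the figures in that passage, here is a small Python conversion of Turing’s estimates into modern units (taking 1 byte = 8 bits and a decimal megabyte); the bit counts come straight from the quote above.

def bits_to_megabytes(bits):
    # 8 bits per byte, 1e6 bytes per (decimal) megabyte
    return bits / 8 / 1e6

estimates = [
    ("Imitation-game estimate (10^9 bits)", 1e9),
    ("Encyclopaedia Britannica, 11th ed. (2 x 10^9 bits)", 2e9),
    ("Upper estimate of brain capacity (10^15 bits)", 1e15),
]
for label, bits in estimates:
    print(f"{label}: about {bits_to_megabytes(bits):,.0f} MB")

Turing’s 10⁹ bits works out to roughly 125 megabytes, a trivial amount of storage today, which underlines how thoroughly the hardware side of his forecast has been met while the programming side has not.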

This assumption, that all we need to achieve true machine intelligence is some simple program to emulate a child’s mind and a computer with as many switches as the brain, has been adopted by many today. The human mind, however, is unimaginably complex. Superficially, it might be described as a system of on/off switches, comparable to computers, but we don’t know exactly where those switches are or how they work. Every cell in the brain containing DNA is, in itself, a system containing all the information used to create the entire individual. Besides the tens of thousands of protein-coding genes, the part of the genome once dismissed as “junk” is now known to contain at least 4 million gene switches.

As we begin to unravel how the brain works on a cellular level, these biological systems prove themselves to be more and more complex. As such, I don’t believe we are anywhere near as close to artificial intelligence, or artificial consciousness, as many assume.

Eugene Goostman’s “intelligence” is really nothing more than a logic tree or expert system similar to IBM’s Deep Blue, the machine that beat Garry Kasparov at chess in 1997. Though that program and current chess programs are impressive accomplishments, they just play out possible scenarios based on the laws governing chess. Then they choose the best move, given available information. It is a brute-force solution to decision making, running through all possible answers until it finds the one that fits.
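
To make the “brute force” idea concrete, here is a minimal Python sketch of exhaustive game-tree search. It is emphatically not Deep Blue’s code: the toy game (take one or two counters from a pile; whoever takes the last counter wins) and the function names are mine, chosen only to show the play-out-every-scenario-then-pick-the-best-move pattern.

def best_move(pile, maximizing=True):
    """Score the position from the first player's perspective (+1 win, -1 loss)
    and return the best move for whoever is to move."""
    if pile == 0:
        # The previous player took the last counter, so the player to move lost.
        return (-1 if maximizing else 1), None
    options = []
    for take in (1, 2):
        if take <= pile:
            score, _ = best_move(pile - take, not maximizing)
            options.append((score, take))
    # The first player picks the highest score; the opponent picks the lowest.
    return max(options) if maximizing else min(options)

score, move = best_move(7)
print(f"From a pile of 7, take {move} (forced win: {score == 1})")

Deep Blue layered pruning, handcrafted evaluation, and special-purpose hardware on top, but the skeleton of the decision making is this same exhaustive search.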

Programs that do this well are significant achievements. The programmers who write them deserve enormous credit. The result, however, doesn’t approach consciousness. Using a similar logic tree, Eugene Goostman analyzes language and chooses a response designed to imitate intelligence, but it obviously doesn’t have the ability to think as humans and even higher-order animals do. Turing, however, assumed that “by the end of the century,” meaning 2000, we would have a firm grip on this challenge.
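
For comparison, here is a toy Python sketch of how a scripted chat program picks canned replies by pattern matching. The patterns and responses are invented for illustration and are not from the actual Eugene Goostman program; they only show how a fixed rule table, plus a forgiving persona, can imitate conversation without any understanding.

import re

RULES = [
    (r"\bhow old\b", "I am 13 years old. Why do you ask?"),
    (r"\bwhere .*live\b", "I live in Odessa. It is a big city in Ukraine."),
    (r"\b(like|love)\b", "I like to chat. My guinea pig likes to sleep."),
]
FALLBACK = "Sorry, my English is not so good. Can you repeat that?"

def reply(message):
    text = message.lower()
    for pattern, response in RULES:
        if re.search(pattern, text):
            return response
    return FALLBACK  # the teenage persona excuses anything the rules miss

print(reply("How old are you?"))
print(reply("Explain the Riemann hypothesis."))  # falls back to the cover story

Nothing in that table understands a question; the cover story, not comprehension, does the work.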

It’s not clear to me, in fact, that we will be able to create a real “artificial intelligence” using existing computer technologies. Quantum computers offer the hope of much faster and more sophisticated calculations, but, once again, calculations are not consciousness. Even if coders design an artificial intelligence capable of fooling 100% of judges 100% of the time, does that indicate human-like intelligence? This question would be easier to answer if we first had a reasonable understanding of our own intelligence.

It’s difficult to say where intelligence starts and stops even in biological systems. We know a plant lacks intelligence, as do most animals (or at the very least, we know biology has led them to have a very different kind of intelligence). We say human beings have intelligence, but judging by the fiasco that is our current state of affairs, there are days I’m not terribly comfortable with that statement.

That said, I believe that Eugene Goostman did accomplish something very important.

Before I explain what I think is the real impact of the Eugene Goostman experiment, allow me to speak briefly about human development. Humans are born replete with genetically based instincts, including the template of language. We see the same genetic capabilities in many animal species, though they are of course far less developed. Birds, for example, communicate all kinds of information vocally, ranging from warnings to mating status. The specific details of those messages are influenced by the sounds that the birds hear from other birds, but the basic template is in their genes.

The same situation exists in humans. Learning to speak is the process of learning the details of a specific language. Babies, however, come from the womb with an instinctive understanding of profoundly complex grammatical rules and an innate drive to learn local language. If children aren’t taught a language, they create their own idioglossias. A variant of these spontaneous languages is known as cryptophasia, or twin speech.

Cryptophasias, by the way, don’t only occur among twins. There was a fascinating incident in a very rural part of Idaho where my father grew up. It involved two babies, one from an English-speaking family and the other from a primarily Japanese-speaking family. In the thinly populated farming community, the two neighboring families helped one another by taking turns caring for both infants.

Since the two babies couldn’t understand one another’s home languages, they simply created a new one. Nobody else could understand a word of it, and the kids refused to speak anything else as they grew older. Eventually, the families were forced to keep the children apart until they learned their parents’ languages.

These phenomena illustrate the incredible power of the “mechanism” that Turing assumed would be easy to replicate via computer code. Enormous effort and resources, however, have been poured into numerous failed attempts to code a program that can learn to speak naturally and intelligently. This has been a profound disappointment—not just for computer scientists who believe that the human mind can be easily replicated. Some linguists also reject the notion that humans are born with an operating system that predetermines much of our nature, including our proclivity to use language. I suspect their resistance is based on ideology, not science.

For those interested in this subject, I recommend the work of Harvard-based Canadian linguist and cognitive scientist Steven Arthur Pinker. Though much of his work is scholarly, The Language Instinct: How the Mind Creates Language is written for educated laypersons.

Bolstering Pinker’s views is the fact that the animals most like us, apes, are obviously born with a language template. Though they don’t use words as we do, all apes have specific vocalizations and gestures that signal a large number of social situations to their groups.

Groups of apes that are isolated from other members of their species nevertheless use very similar vocalizations. Pinker points out that isolated tribes of humans share the same basic rules of grammar used by all humans. There has never been a group of humans without complex language, so it’s hard to see language as an invented technology rather than part of our programming.

The mechanism or genomic operating system, however, goes far beyond verbal language. We tend to take our many human instincts for granted because they are the water we swim in. I’ll give you a simple example.

When my daughter was born, I found that the scientific literature about the differences between boys and girls was essentially true. Specifically, girls have better fine motor control and physical skills, at least initially. Later, boys catch up with girls. Still, I was amazed that, within days of her birth, my daughter recognized the faces of her family. She would look straight into our eyes and give us a recognition smile. Strangers, however, caused her considerable anxiety. She turned away from people she did not know and refused to meet their eyes.

When she was only a few weeks old, I had her on my lap in front of the television. I turned the channel to a Spanish-language variety show as the commentator introduced a very popular musical act. My weeks-old daughter had never heard nor seen applause. Nevertheless, as soon as the crowd on the television began to applaud, she immediately began to clap. I still contemplate this fully developed and rather mysterious clapping instinct, clearly written in the code of her DNA.

When Turing wrote his paper, very little was known about genetics. It wasn’t until 1953 that James Watson and Francis Crick discovered the double helical structure of the DNA molecule. More importantly, we are only beginning to understand the complexity of DNA information storage and processing today.

It was not that long ago that geneticists confidently referred to most of the genome as “junk” because it did not encode proteins. Today, we’ve identified at least 4 million gene switches in that junk, found in nearly every one of the hundred trillion cells in the human body. There are at least 20,000 genes that create proteins that trigger specific actions, but the genes and the rest of the genome interact in ways that increase complexity to nearly incomprehensible levels.

The bottom line is that the task of creating the computer equivalent of a human brain is vastly more difficult than Turing or his followers imagined. We can’t say that it won’t be possible—perhaps hundreds of years from now—to create a computer that duplicates the complexity of the human genome along with the ability to think and learn. That said, I believe that the Eugene Goostman program that won the Turing Test does signal an important step in computer technology.

I suppose we’re stuck with the term AI. While I don’t think computers are any more intelligent than complex clockworks, the term is widely used. What the Eugene Goostman program actually represents is a breakthrough in UI, the initialism for “user interface.”

Most people—especially younger people—confuse advances in hardware with advances in the user interface. Yes, computers are smaller and faster now, but the basic physical functionality of computers today is essentially identical to what it was when the first PCs hit the market. The real change has come in the way we communicate with computers.

It is an interesting coincidence that the emergence of a Turing Test winner came about almost exactly 20 years after Microsoft dropped support for DOS. DOS, for you newbies, was the command-line interface we used in the old days. If you don’t remember DOS, this article will give you a good idea of what it was about. DOS is the acronym for “disk operating system”; Microsoft’s version was modeled on CP/M, the operating system written by Gary Kildall, whom I knew personally. (Interestingly, the Bill Gates/Microsoft fortune is a direct result of Kildall’s lack of business acumen.)

Personally, I miss DOS. Command-line UIs, for those willing to put in the work and learn complex instruction sets, are enormously powerful and fast. Few people fit that description, though, so easier ways of interacting with computers had to be developed.

This is why, from the beginning, Bill Gates and other innovators have sought the holy grail of a human-language interface. If Turing’s assumptions about the ease of writing a human-mind emulator had been accurate, we would already be talking to our computers the way Joaquin Phoenix’s character did in the film Her. Of course, the downside of truly intelligent computers is that we might actually have to worry about the James Cameron Skynet singularity scenario. Somebody at Google Voice, incidentally, has an interesting sense of humor.

The Invisible Robots Invade

Google Voice, however, is a good example of the kind of AI that the Eugene Goostman program represents. Like Siri, it’s an extremely sophisticated user interface based on a complex logic tree, which is not dissimilar to the flow charts you learn in basic programming classes. This is not thinking; it’s the formalized recreation of thinking or logic in some computer language.

Even on your personal computing devices, AIs are designed to anticipate your decisions and offer possible solutions using logic trees. Some can even “learn” by replicating patterns in your behavior.

AIs anticipate search queries whenever you use Google, Bing, Yahoo, or other search engines. Another relevant example of complex logic-tree AI is the translation program. Despite containing massive amounts of information about many different languages, including vocabulary and grammar, these translation AIs do not think in the biological sense. Of course, this does not make them any less useful.
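
As a minimal Python sketch of the query-anticipation idea: rank previously seen queries by frequency and filter on the typed prefix. The query log below is made up; real search engines weigh vastly more signals, but the “anticipate the decision” pattern is the same.

QUERY_LOG = {
    "turing test": 120,
    "turing machine": 95,
    "turing test 2014": 40,
    "terminator skynet": 30,
    "translation software": 12,
}

def suggest(prefix, k=3):
    """Return the k most frequent logged queries starting with the typed prefix."""
    matches = [q for q in QUERY_LOG if q.startswith(prefix.lower())]
    return sorted(matches, key=QUERY_LOG.get, reverse=True)[:k]

print(suggest("tur"))  # ['turing test', 'turing machine', 'turing test 2014']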

This is a simple example of a flowchart logic tree provided by Wikipedia.

While this example is intuitively obvious, the technique can be and has been applied to seriously complex problems. If the economic incentive exists, a flowchart can be created in software form for almost anything. These are often called expert systems because they integrate information from experts. For example, programs have been created that allow an operator to deal with problems encountered in undersea drilling operations.

The computer asks a series of questions about the situation, following a logic tree that attempts to distill the knowledge accumulated by the most knowledgeable experts. As questions are asked and answered, enough information is accumulated for the expert system to offer a diagnosis and solution.
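
Here is a bare-bones Python sketch of that question-and-answer pattern: a hand-written decision tree walked from the root to a leaf. The drilling questions and diagnoses are invented placeholders, not content from any real expert system.

TREE = {
    "question": "Has drilling-fluid pressure dropped suddenly?",
    "yes": {
        "question": "Is there gas in the return flow?",
        "yes": "Possible kick: shut in the well and alert the drilling team.",
        "no": "Likely lost circulation: consider lost-circulation material.",
    },
    "no": "Readings nominal: continue monitoring.",
}

def diagnose(node):
    """Ask questions and follow branches until a leaf (a diagnosis) is reached."""
    if isinstance(node, str):
        print(node)
        return
    answer = input(node["question"] + " (yes/no): ").strip().lower()
    diagnose(node["yes"] if answer.startswith("y") else node["no"])

diagnose(TREE)

Every path through the tree was fixed in advance by whoever encoded the experts’ knowledge; the program asks, branches, and reports, but nothing in it understands drilling.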

An obvious use of this technology is in medical diagnosis. One hurdle for medical diagnostic programs is that they could be considered medical devices requiring approval by the FDA and other regulators. A system designed to cover every possible medical condition, however, would be impossible to validate in clinical trials.

Moreover, it’s probably impossible to create an infallible expert system because programmers are human and cannot foresee every eventuality. So experienced human involvement would be required for optimal results. Nevertheless, we’re seeing diagnostic AIs increasingly used, though they are specifically sold as administrative aids to physicians. Isabel, founded by a couple after a near-fatal misdiagnosis of their daughter, is a good example.

Critics say that these programs cannot duplicate the success of diagnosticians like Dr. Gurpreet Dhaliwal, but that’s not the point. Dhaliwal is a well-known genius in his field; widespread use of a system that could approach his success rate would save millions of lives every year.

In fact, Dhaliwal’s success rate is based on his grasp of information that will be considered cripplingly limited within a few years. As more and more personalized genomes are being analyzed and matched to medical records, a revolution is taking place in medicine.

Today, we must wait for genetic proclivities to show themselves as symptoms. When enough genomes are matched to medical records, we’ll gain the ability to predict disease with almost unimaginable accuracy. Real-time monitoring of the proteins expressed by genes in the blood will take diagnostics to an even higher degree of accuracy.

The leader in this revolution, I believe, is Dr. Eric Schadt. Schadt has access to tens of thousands of medical records with sequenced genomes through the Mt. Sinai system. When we launched this service, Dr. Schadt generously spoke to me on video about the impact that genomics will have on medicine. If you haven’t seen the video, I recommend it.

Robotics will play a major role in this revolution, though not in the way that most people assume. This is because there is no clear definition of a robot. At least, there’s a huge gap between the common use of the word and the way it’s used among robotic scientists.

Certainly, there are people who see robots in the way they are portrayed in many science fiction shows and movies. Here’s a good example of this genre created by the National Institute of Advanced Industrial Science and Technology in Tokyo. More people, I suspect, understand that robots don’t need to appear particularly humanoid. Rethink Robotics’ first effort at an adaptable and relatively low-cost robot is one example. Even the Roomba, iRobot’s successful product line, is widely recognized as a robot. (I want one of these but haven’t yet convinced myself that it’s worth the cost of scaring the cats.)

As an investor, the robotics company that interested me most was Schaft, which recently pulled off an Optimus Prime-worthy trampling of the competition at the US military’s robo-olympics. Google has acquired the company, however. This ends pure-play investor hopes, but it may be good news for Google, which plans to use the company’s expertise to enter the consumer robotics markets. 

Robots capable of taking care of all your cleaning and laundry chores are not that far away. One reason that the Japanese remain leaders in robotics is that the country has known for a long time that its depopulation was leading to a healthcare crisis. There aren’t enough young people willing to work as caregivers for a growing population of infirm elderly. Robotics will help solve that problem.

The reach of robotics into the new economy, however, goes far beyond these recognizable robots. On a purely technical level, any AI-controlled physical mechanism is a robot. Few would mistake Schaft robots for anything else, but most people don’t know that we are, in fact, already surrounded by invisible robots.

Modern cars, for example, contain dozens of microchips that run AI programs to control various mechanical and electronic systems. These are robots. Your smartphone contains numerous AIs that control physical components of your device, qualifying them as robots as well.

As algorithmics (the science of software) improves along with computer power, the ability to create increasingly useful robots will continue to accelerate. I think it’s inevitable that over time, invisible robotic AIs will replace the operating systems that control our computing devices. Instead of having a one-size-fits-all OS, you’ll have an AI capable of learning the way you live and work to optimize the usefulness of your technologies.

The biggest impact, however, will be in biotech. Today, there are genius diagnosticians like Dr. Gurpreet Dhaliwal who can sort through available information to identify nearly all diseases. The genomics revolution, however, will change that.

Though genomics will allow better and earlier diagnoses—enabling true personalized medicine—the complexity of the genome means that no individual will be able to learn and remember the impact of tens of thousands of genotypes or their many, many combinations. Only medical AIs can fill that role.

Moreover, the cures for many diseases will not come from drugs or surgery. Increasingly, new cures are going to come from our own cells. This is regenerative medicine.

I’ll give you one brief example. I’m lactose intolerant. This could be cured easily with a simple transplant of the cells that manufacture lactase. To prevent immune rejection, they would have to have my DNA. This means that some of my cells would have to be converted to induced pluripotent stem (iPS) cells, which would then be engineered to become lactase-producing cells.

I’ve already had skin cells converted to iPS cells, which are functionally identical to the embryonic cells that I came from. They were then converted to an adult cell type. So this can be done now, but it’s a slow and somewhat expensive procedure. What is needed for wide-scale usage is a robotic process that would take patients’ skin cells, convert them to iPS cells, and then nudge them into whatever cell type is needed to treat disease or reverse aging.

The robotic machine designed to do this would look nothing like robots as most people imagine them. Most likely, it would be indistinguishable on the surface from many other advanced laboratory tools. Cells would go in one side, and a different cell type would come out the other side a few weeks later. It would undoubtedly act as a conveyor system, with many different individuals’ cells being processed in parallel.

If this seems like a science-fiction scenario, consider that a company in our portfolio has already started discussions with an important medical robotics company. In time, we will all have our cells banked and our sequenced genomes stored securely in the cloud. The diagnostic tools in your mobile device will tell the system to begin making cellular replacements and improvements even before they’re needed. This convergence of sciences is called “regenomics,” and AIs and robotics will play a critical role in its development. That’s why I’m happy that the Turing Test has been passed, even though it doesn’t mean what most people think it means.

For transformational profits,

Patrick Cox

From Patrick’s Research Team: To sign up for Patrick’s free, 6-days-a-week roundup of the biggest stories in tech, click here to visit his Transformational Technologies homepage. At the site, you can sign up in the box at the top right. Thanks for reading.

The article TransTech Digest: Artificial Intelligence and Genuine Stupidity was originally published at mauldineconomics.com.