Machine Intelligence is not Artificial - An Introduction
Exploring the past to create the future.
I’m writing a book (and making a movie)
For the past few years, I have been exploring the history (and pre-history) of artificial intelligence (AI) and the broader field of what Norbert Wiener called cybernetics, Alan Turing called machine intelligence, and Pamela McCorduck called “intelligence outside the cranium”.
I have been privileged to work with a couple of Carnegie Mellon University (CMU) professors to develop a documentary on McCorduck's 1979 book Machines Who Think, which has led me through the CMU and MIT archives and beyond - to an amazing world of the history of thinking about thinking machines.
Beyond the historical fascination, I have become convinced that much of what was left behind by an industry that has largely looked only forward may be relevant to our societal efforts toward machine intelligence, not only in the future but today. I've decided to start sharing what I am finding as I go, to gather interest, feedback, and support as I do the slow dive into this topic and develop a more formal long-form consideration of how we can move beyond the artificial to a truly intelligent machine.
My why and personal journey to the past
Why am I heading down this rabbit hole (or, more appropriately, this interconnected network of rabbit holes)? This is a question I have been asked, and continue to ask myself periodically, as it has moved from curiosity to deep interest and avocation to semi-vocation over the last few years. Certainly, there is a foundation for it in my background (early exposure to AI in the '80s as a kid, a keen interest in the Dan Dennett and Douglas Hofstadter variety of AI and philosophy of mind as I pursued more formal education in the neural sciences in the '90s and '00s, and a systems thinking approach to the neurological, psychological, and cognitive intersections of TBI and PTSD in the 2010s). More recently, I moved into the health tech space and began working with formal applications of AI in my current role as Chief Scientific Officer at Equideum Health (e.g. "Blockchain-orchestrated machine learning for privacy preserving federated learning in electronic health data," 2020). It should be noted that while this MIINA exploration has primarily been a side project, our Equideum Health CEO/Founder Heather Leigh Flannery has both encouraged it, seeing its long-term value in line with our company's goals, and been one of my many co-discussants and informal collaborators on this journey.
Diving more deeply into the literature and practice of AI over these past several years, with the help of colleagues far more expert than I, I began to sense conceptual gaps between the 'intelligence' related disciplines (technology, psychology, neuroscience, philosophy, mathematics/logic, etc.), along with major differences in process between engineering on one hand and the biological and psychological sciences on the other. This was further compounded by semantic confusion that often went undetected (e.g. 'neuron' or 'neural' means something very different to a computer engineer than to a modern neuroscientist - with a complexity factor of at least 10^20 between them) and by cultural differences that made sorting any of this out highly unlikely in the day to day.
In mid-2021 I began a perpetual conversation with Amicia D. Elliott, Ph.D., another neuroscientist who has sojourned across academia, government, and the tech industry. We quickly began to articulate the shared sense of cross-disciplinary challenges we saw, and coined the term Neuroboros - from the Greek 'ouroboros', the symbol of the serpent eating its own tail - for the persistent computer-as-brain-as-computer co-dependent model we saw across the brain and computer sciences. We presented on its 50+ year background and some of the reasons for it, along with survey results from our fellow scientists and engineers on current perceptions, at the Society for Neuroscience (SfN) meeting in November of 2021. We planned and continue to explore this idea as a series of podcasts (which parallels some aspects of this series) and some developing written and experimental work. My somewhat narrower focus here in the MIINA series (yes, narrower - our overarching Neuroboros vision is considerably more expansive) continues to be interwoven with that work and influenced by our ongoing dialogue. Ami should be credited for influencing much of what I get right here and held blameless for what I get wrong.
What happened next was as simple as a trip to the library and a row down the river. I got a stack of books on the history of computing, AI, psychology, neuroscience, and other 'intelligence' related disciplines out of the library and began both reading for context and rapidly cross-indexing for alignments. In addition to strengthening my foundation of knowledge and priming me on the most cross-connected names and events, two books in particular shoved me down into this rabbit hole turned rabbit cave network. The first was Possible Minds: 25 Ways of Looking at AI, a 2019 collection of essays edited by John Brockman. Without even looking closely at the organizing principle of the essays (they were focused on Norbert Wiener's 1950 book The Human Use of Human Beings, on the ethics of what was then still five years away from being called AI), I turned to the essay by Daniel Dennett - a philosopher I had long enjoyed and even got to meet once when I took a grad school Philosophy of Mind class as an undergrad. Dennett influenced me once again as I read of his early encounter with Wiener's book and its precursor, Cybernetics. Cybernetics was a term I felt I knew the meaning of (you know, computers and stuff) but really did not. I began to realize that there was a major gap in my knowledge of the history of the intersection of neuroscience and computing. Furthermore, here was an entire book of essays, by some of the top thinkers and practitioners in the field of AI, talking about some guy I had barely heard of, a science that was seemingly lost to history, and a book that warned of many of the ethical challenges with AI we worry about today - written in 1950! I began using cybernetics and Wiener as a key node in my cross-indexing of this tangled history of people who think about machines that think, and things began to make a lot more sense.
Fast forward to the summer of 2022, when I had a slower-burning Aha! moment with the second book I mentioned above, Pamela McCorduck's 1979 Machines Who Think. It was simply light-years better than anything else I had read on the history of AI - in its detail, depth, and readability, but also in its framing and historical context. I had gone through dozens of books on the history of AI, from technical to conceptual to biographical, and here was one that gave you all of the context and more for where AI came from, and it didn't read like a technical manuscript expanded to book length. Plus, there was truly unique detail that jumped out. In one exchange with John McCarthy, who gets unquestioned credit for coining the term AI in pretty much every other source I have seen, she quotes him as saying he may have borrowed it from somewhere else (and now, having listened to the full set of interviews she did with McCarthy, whom she'd known professionally and socially for almost a decade at that point, I can see why she left that nugget in the book). Here was a person who had functional knowledge of the topic and access to the individuals from the Dartmouth meeting (she had assisted Feigenbaum and Feldman on the first textbook on AI, published in 1963; hung out with McCarthy and others at Stanford in the '60s; sipped sherry weekly with Herb Simon; hosted a regular salon - the Squirrel Hill Sages - with Simon, Newell, and others from CMU in the '70s; and was emailing [yes, emailing] with Marvin Minsky at MIT in the 1970s as well), and who, as an English professor at the University of Pittsburgh, knew how to craft a narrative and write well. This realization had me buying, giving away, and discussing a book more than 40 years old with almost as much excitement as if it were my own.
I had also picked back up the sport of rowing in 2022, and it was at coffee after a morning row that one of my fellow Steel City Rowing Club members, who happens to be an English professor at CMU, mentioned she was talking with another CMU professor about a documentary on a book relating to the history of AI. The book had been funded in part by CMU back in the 1970s - yet almost no one at CMU knew who its author was. It turns out they had been able to interview McCorduck about the book and its history not long before she passed away in October of 2021. They were now conducting other interviews and wanted someone to start going through the archives she had donated to CMU, including the tapes and transcripts of all of the interviews she had done in 1974-75 with attendees of the 1956 Dartmouth meeting, along with many other early AI practitioners. I jumped at, or maybe was shoved into, that cavern and have been happily tumbling through it since.
What has been most remarkable has been not only the historical context and personal insights I have found in those files and in the ever-growing associated information I have been sorting through, but also the deliberation with which McCorduck intended this to be the case almost 50 years ago. In her detailed proposal from March of 1974 to the Office of Naval Research (ONR, which administered ARPA funds at the time) for the oral history project on AI that also produced the book, she describes her intent for the archives to be made available for future research. While the ONR opted not to fund her project (they instead funded a more technical book on the topic by Nils Nilsson a couple of years later), the project, eventually funded by CMU and MIT, was obviously well thought out in advance, both in how it would be done and in the impact it would have. Coincidentally, she proposed the project in early 1974 just as the first AI funding winter was settling in (CMU had had its funding cut by ARPA), and the book was published in 1979, just before that first AI winter ended in 1980. I'm not saying McCorduck single-handedly ended the first AI winter, but the "godfather" of AI, Geoffrey Hinton, who got his PhD in AI in 1978 and began teaching at CMU in 1982, may owe some indirect thanks to this "godmother" of AI for his early funding being turned back on.
Beyond all of this historical importance, there has also been a nagging and increasing sense that, similar to the interdisciplinary gaps between the brain and computer fields I mentioned before, there may be something important missing, lost, or left behind. More on that below.
All of this makes the short answer for why I am doing this simple: how could I not?
Why you (should) care
"That's great, Sean. Glad you found another purpose in life. Why should we care?"
Better science. Cheaper research. Faster (medical) miracles (i.e. 10,000x faster clinical decision support).
Or more simply: Quadrillions ($1,000,000,000,000,000s) in value from quality adjusted life year improvement worldwide.
And that's only in healthcare and life sciences research.
I've run the numbers for my industry (and previously ran ROI studies on the impact of health research on health outcomes for the DoD - complex work with lag and attribution, but it can be done), and the value of getting these solutions right from a health outcomes perspective will be enormous for healthcare and life sciences. Meanwhile, the impact across industries from rapid advances in machine intelligence (not just AI) would be almost incalculable. And the impact on humanity - endless. And yes, there are all those various and sundry existential risks we hear about with AGI. But that's part of the value of looking for what was left behind. Recall that the founder of cybernetics, Norbert Wiener, was cautioning about where this could go - government surveillance, corporate manipulation, and the impact on human work, purpose, and livelihood - half a decade before AI was even a thing.
The interdisciplinary nature of cybernetics and early machine intelligence meant that many ethical considerations were addressed at the conceptual stage, rather than treated - as today's narrow engineering approach to AI tends to treat ethics - as something to hand-wave about or staple on later. There were also functional possibilities - analog-digital hybridization, integration of complex intelligence sub-functions, truly neuromorphic neural nets and (not or) symbolic approaches to complex information processing - that may have been left behind or unexplored. Revisiting these early, complex (and admittedly hard) discussions, and aligning advanced, integrated brain and cognitive sciences with different approaches in both neuromorphic hardware and software, could rapidly amplify our capabilities in machine intelligence.
In short, there may be aspects of the past left on the cutting room floor that can help us get to machine intelligence (or AGI, if you prefer the qualifier) sooner - and if we get it right, we get better, cheaper, faster EVERYTHING.
What's next
We have many places to go with this series, and they will take some time to get to. I'll begin with six areas I have already touched on (for those who follow me on LinkedIn, these are mildly edited versions of the articles I posted there from late December 2023 to February 2024 - links below if you want to skip ahead). Briefly, the topics are:
Machine Intelligence is not Artificial (MIINA) Part 1 - An introduction to my hypothesis that machine intelligence and artificial intelligence may not be synonymous, some early conceptual and interdisciplinary aspects of machine intelligence were left behind, and a closer look at several key events that may have contributed to the fracturing and simplification of focus in AI.
MIINA Part 2 - A closer look at Allen Newell's 1957 thesis as he dissects early "information processing" efforts into 5 groups (1 - engineers, 2 - AI engineers, 3 - cyberneticists, 4 - cyberneticists + digital computer analysts, and 5 - the information processing group) in the wake of the 1956 Dartmouth meeting on AI; and a glimpse at where each of these groups may have advanced over the last 60+ years.
MIINA Part 3: In the Beginning - A look back at Pamela McCorduck and her 1979 book Machines Who Think drawn from her oral history of AI and its early practitioners (the archives of her interviews with these individuals have been a key source of material), along with a timeline of some of the key publications and events from the earlier pre-AI era 1936-1956 of machine intelligence.
MIINA Part 4: Cybernetics and Norbert Wiener - An overview of cybernetics and its early years in the United States, an introduction to Norbert Wiener, a principal founder of this "science of communication and control in animals and machines", and a review of the early series of cybernetics-related meetings in the U.S. hosted by the Macy Foundation and others.
MIINA Part 5: The Ratio Club and British Cybernetics - A review of the Ratio Club, a group of British cyberneticists who met 37 times between 1949 and 1955 to tackle 28 questions relating to human intelligence and its potential machine counterparts, leading to, among other things, one of its members, Alan Turing, publishing what is often considered the seminal piece on machine intelligence in 1950, "Computing Machinery and Intelligence."
MIINA Part 6: Dartmouth 1956, the Birth of AI and the Balkanization of Machine Intelligence - An investigation of what transpired between what are often treated in isolation as the two pillars of the beginning of AI: Turing's 1950 paper and the 1956 Dartmouth meeting. Leaning heavily on the work of Ronald Kline and the first-person interviews Pamela McCorduck conducted with attendees of the Dartmouth meeting, we begin to see more detail on the siloing of the different machine intelligence efforts Newell described in his 1957 dissertation, the factors that led to the narrowing of the aperture (and attendance) of the Dartmouth meeting, and the resulting Balkanization of the various efforts that has largely persisted for more than 60 years, to the present day.
As I have moved through the background research and drafted the initial essays, it is clear that there are many additional topics I will need to dive into to fill in my understanding and explanation. These will begin with:
Brain complexity - Most people outside of neuroscience don't quite grasp the complexity of the brain; we are barely beginning to within neuroscience. Given that most artificial neural networks, on which much of today's AI success is built, use only a simple on/off neuron as their model, we are missing a considerable amount of complexity, including the digital/analog hybrid behavior of neurons and glia in the brain.
Psychology of intelligence - Intelligence in humans is poorly understood, and yet what we do know has fathoms of complexity that simply haven't been taken into account by much of the AI field. There are dozens of cognitive functions and sub-functions at play, not simply the handful of isolated functions most AI has focused on as low-hanging fruit for nearly 70 years. And in the brain, these are all integrated - with integration being a likely prerequisite for any real intelligence, one that AI has barely begun to scratch the surface of.
Consciousness and philosophy of mind - The elusive "hard problem" of consciousness relating to human brains and cognition might seem to be merely tangential to discussion of machine intelligence. However, it may be instructive to head into this territory and may conversely be an area that benefits from our explorations of non-human intelligence.
Soviet cybernetics and the Cold War - After initially denouncing cybernetics as American capitalist imperialism, the Soviets had a change of heart and embraced it by the late 1950s. A tremendous amount of work on the topic continued behind the Iron Curtain that needs to be investigated (an ongoing effort on my part, though others have certainly looked broadly at this work). Much of Soviet cybernetics from the 1960s onward included the second-order cybernetics approach of integration with the social sciences, so this gets more complex. It will also be instructive to look at how Cold War competition influenced both cybernetics and artificial intelligence funding and findings in different places, shaped by the variety of forces at work.
2nd order cybernetics and the 1960s - Beyond Soviet cybernetics (and other later-phase global pockets of interest), the 1960s saw an increase in interactive, mutual influence between cybernetics and major social developments in the US and Western Europe. While some of these developments make for fascinating asides of history, others are crucial to understanding where the early, broad thinking about human and machine thinking advanced, where it stalled, and where it diffused into other areas.
There will be other areas of focus, as my research is ongoing, and my thinking is still resolving based on what I am finding.

