Irving Finkel: Deciphering the Secrets of Ancient Civilizations and the Great Flood Myth


About This Episode

Guest

Irving Finkel - Assistant Keeper, Department of the Middle East, British Museum

Dr. Irving Finkel is a British philologist and Assyriologist who has served since 1976 as Assistant Keeper of Ancient Mesopotamian script, languages, and cultures at the British Museum. He oversees the museum's collection of roughly 130,000 cuneiform tablets and specializes in reading Sumerian, Akkadian, Babylonian, and Assyrian texts. He is best known for identifying and studying a tablet recording a Mesopotamian flood story that predates the biblical account of Noah, research he presented in his book "The Ark Before Noah" and in a project that built a round ark reconstructed from the tablet's instructions.

Host

Lex Fridman - Research Scientist at MIT, Podcast Host

Lex Fridman is a research scientist at MIT's Laboratory for Information and Decision Systems, focusing on artificial intelligence, human-machine interaction, and machine learning. His podcast is known for in-depth conversations spanning science, technology, history, and philosophy.


#487 – Irving Finkel: Deciphering Secrets of Ancient Civilizations & Flood Myths Irving Finkel is a scholar of ancient languages and a longtime curator at the British Museum, renowned for his expertise in Mesopotamian history and cuneiform writing. He specializes in reading and interpreting cuneiform inscriptions, including tablets from Sumerian, Akkadian, Babylonian, and Assyrian contexts. He became widely known for studying a tablet with a Mesopotamian flood story that predates the biblical Noah narrative, which he presented in his book “The Ark Before Noah” and in a documentary that involved building a circular ark based on the tablet’s technical instructions.

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep487-sc

See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:

https://lexfridman.com/irving-finkel-transcript

CONTACT LEX:

Feedback – give feedback to Lex: https://lexfridman.com/survey

AMA – submit questions, videos or call-in: https://lexfridman.com/ama

Hiring – join our team: https://lexfridman.com/hiring

Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:

Irving’s Instagram: https://www.instagram.com/drirvingfinkel/

The Ark Before Noah (book): https://amzn.to/4j2U0DW

Irving Lectures Playlist: https://www.youtube.com/playlist?list=PLYXwZvOwHjVcFUi9iEqirkXRaCUJdXGha

British Museum Video Playlist: https://www.youtube.com/playlist?list=PL0LQM0SAx603A6p5EJ9DVcESqQReT7QyK

British Museum Website: https://www.britishmuseum.org/

The Great Diary Project: https://thegreatdiaryproject.co.uk/

SPONSORS:

To support this podcast, check out our sponsors & get discounts:

Shopify: Sell stuff online.

Go to https://shopify.com/lex

Miro: Online collaborative whiteboard platform.

Go to https://miro.com/

Chevron: Reliable energy for data centers.

Go to https://chevron.com/power

LMNT: Zero-sugar electrolyte drink mix.

Go to https://drinkLMNT.com/lex

AG1: All-in-one daily nutrition drink.

Go to https://drinkag1.com/lex

OUTLINE:

(00:00) – Introduction

(00:43) – Sponsors, Comments, and Reflections

(09:53) – Origins of human language

(15:59) – Cuneiform

(23:12) – Controversial theory about Göbekli Tepe

(34:23) – How to write and speak Cuneiform

(39:42) – Primitive human language

(41:26) – Development of writing systems

(42:20) – Decipherment of Cuneiform

(54:51) – Limits of language

(59:51) – Art of translation

(1:05:01) – Gods

(1:10:25) – Ghosts

(1:20:13) – Ancient flood stories

(1:30:21) – Noah’s Ark

(1:41:44) – The Royal Game of Ur

(1:54:43) – British Museum

(2:02:08) – Evolution of human civilization

====

The following is a conversation with Michael Levin, his second time on the podcast.

He is one of the most fascinating and brilliant biologists and scientists I’ve ever had the

pleasure of speaking with.

He and his labs at Tufts University study and build biological systems that help us understand

the nature of intelligence, agency, memory, consciousness, and life in all of its forms

here on Earth and beyond.

And now, a quick few-second mention of each sponsor.

Check them out in the description or at lexfridman.com slash sponsors.

It is, in fact, the best way to support this podcast.

We’ve got Shopify for selling stuff online, CodeRabbit for AI-powered code review, Element

for electrolytes, UpliftDesk for my favorite office desks that I’m sitting behind right

now, Miro for brainstorming ideas with your team, and Masterclass for learning stuff

from incredible people.

Choose wisely, my friends.

And now, on to the full ad reads.

I try to make them interesting, but if you must skip, please still check out the sponsors.

I enjoy their stuff.

Maybe you will too.

To get in touch with me, for whatever reason, go to lexfridman.com slash contact.

All right, let’s go.

This episode is brought to you by Shopify, a platform designed for anyone to sell anywhere

with a great-looking online store with an engineering stack that utilizes the beauty and the elegance

of Ruby on Rails that DHH so beautifully articulated in my conversations with him.

I continue to tune in to DHH’s tweets and posts on X.

Just a beautiful human being.

And it’s just nice to know that he’s a big supporter of Shopify.

He and Toby have been close for years.

It’s just nice to know that great human beings and great engineers can create great products

that also make a lot of money and also bring a lot of usefulness to the world.

So I will forever be celebrating Shopify, not just for the services that they create, but

for the people behind the scenes that are building it.

Sign up for a $1 per month trial period at shopify.com slash lex.

That’s all lowercase.

Go to shopify.com slash lex to take your business to the next level today.

Speaking of beautiful code and incredible people, this episode is also brought to you by CodeRabbit,

a platform that provides AI-powered code reviews directly within your terminal.

The thing I can definitively recommend especially is the CLI version of CodeRabbit.

It’s the most installed AI app on GitHub and GitLab.

2 million repositories in review.

13 million pull requests reviewed.

Basically, a lot of you listening to this know how to generate a bunch of code.

Some of it is AI slop.

Some of it is on the borderline.

But what CodeRabbit CLI does is, as they put it, vibe check your code.

So they do the review process to make sure you go from that AI-generated code to something

that’s actually production-ready by helping you catch errors.

And it supports all programming languages.

You absolutely must go now.

Install CodeRabbit CLI at coderabbit.ai slash lex.

That’s coderabbit.ai slash lex.

Please go support them.

Try it out.

You will not regret it if you’re at all a programmer or exploring programming.

Friends, don’t let friends vibe code without vibe checking the code.

All right?

Anyway, this episode is also brought to you by Element, my daily zero sugar and delicious electrolyte mix

that in all of my crazy travel I always bring with me.

That’s going to be tested when I go into the middle of nowhere for multiple weeks at a time

with just a backpack.

We’ll see.

We’ll see.

We’ll see.

But it’s so easy to bring with you.

It’s light.

It doesn’t take much space.

You just put it in some water.

First of all, it makes the water taste really good.

Second of all, it just balances the nutritional value of the water.

So you don’t want to overdrink water without any electrolytes.

It just makes me feel so good to get the right balance of sodium, potassium, and magnesium

when I am doing fasting for one day or two days at a time, or I’m doing crazy long-distance

runs.

One of the things I learned, actually, is you need to listen to your body.

You need to understand your body.

You need to understand what it needs, what makes you feel good mentally, physically.

And sometimes outside advice is good to incorporate, but really what you need to develop is the

ability to sense deeply the state of your body, what makes it feel good, what makes it feel

bad, have a really nice internal feedback controller that’s able to establish a happy, stress-free

existence.

Anyway, get a free eight-count sample pack with any purchase.

Try it at drinkelement.com slash lex.

This episode is also brought to you by Uplift Desk, my go-to for all office and podcast studio

furniture.

I don’t know if I want to say that all other desks suck, because that wouldn’t be very nice.

But I really want to say that because I’ve tried other desks, and Uplift Desk is what

made me truly happy.

I now have six Uplift Desks for the podcast, for my Windows machine for the video editing.

I have a bunch of Linux boxes.

I have the robotics desk.

I’m doing soldering, all this kind of stuff.

Anyway, all of these Uplift Desks, all of it makes me happy.

You can customize the crap out of whatever you want.

It’s over 200,000 possible desk combinations.

I’m pretty sure all of them are super sexy, super nice in both sitting and standing positions.

Plus, the people that come to install are just really kind human beings.

I just had wonderful experiences all throughout this, everything.

Everything involved with Uplift Desk has been really great.

So please go support them.

They’re great.

And the fact that they are supporting this podcast is like, what?

I’ve been a fan of this for many years before they were supporting this podcast.

The fact they’re doing that now is just, please go buy all their stuff so they keep supporting

this podcast.

Anyway, go to upliftdesk.com slash Lex and use code Lex to get four free accessories, free

same-day shipping, free returns, a 15-year warranty, and an extra discount off your entire order.

That’s U-P-L-I-F-T-D-E-S-K dot com slash Lex.

This episode is also brought to you by Miro, an online collaborative platform.

Miro’s innovation workspace blends AI and human creativity to turn ideas into results.

That’s, by the way, friends, what I’ve been working on, human-robot interaction, HRI.

I’m actually interested in the general problem of heterogeneous systems where you have both

humans and AIs and they have to work together, they have to understand each other, and all

of it fundamentally for the goal of the humans to flourish.

I’m forever humanity first.

So AI should be tools that make human lives better.

And of course, there’s a much longer and fascinating discussion about that topic, on safety, on security, and in general, on human flourishing in this

21st century.

But that’s, friends, for another time.

In fact, we can brainstorm about it using Miro, which converts sticky notes, screenshots, all that kind of stuff into actual diagrams or prototypes

in minutes.

This episode is also brought to you by Masterclass, where you can watch over 200 classes from the best people in the world in their respective

disciplines.

There’s actually a few really nice classes from super famous people that have been added that, for me personally, have been interesting to watch.

So Kevin Hart did one.

He’s definitely a good example of somebody that you just can’t look away from.

There’s something really charismatic and funny about their whole way of being.

Another guy like that is Will Ferrell.

In fact, they can both, I guess you could say, hold a stage presence.

There’s a new one from Stephen Curry and Lewis Hamilton, both folks I should probably almost definitely talk to.

Serena Williams, Gordon Ramsay, John Legend, the great Samuel L. Jackson, and the great Natalie Portman.

The list just keeps going.

Every topic you can think of, there’s really nothing else like this on earth.

I highly, highly recommend it.

It is one of the best ways to get to the essence of the thing, by learning from the people who have gotten to the very top of the world at that thing.

Get unlimited access to every Masterclass and get an additional 15% off an annual membership at masterclass.com slash lexpod.

That’s masterclass.com slash lexpod.

This is the Lex Fridman Podcast.

To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on.

And now, dear friends, here’s Michael Levin.

You write that the central question at the heart of your work, from biological systems to computational ones, is how do embodied minds arise in the

physical world?

And what determines the capabilities and properties of those minds?

Can you unpack that question for us and maybe begin to answer it?

Well, the fundamental tension is in both the first person, the second person, and third person descriptions of minds.

So in third person, we want to understand how do we recognize them?

And how do we know, looking out into the world, what degree of agency there is and how best to relate to the different systems that we find?

And are our intuitions any good when we look at something and it looks really stupid and mechanical versus it really looks like there’s something

cognitive going on there?

How do we get good at recognizing them?

Then there’s the second person, which is the control, and that’s both for engineering but also for regenerative medicine when you want to tell the

system to do something, right?

What kind of tools are you going to use?

And this is a major part of my framework: that all of these kinds of things are operational claims.

Are you going to use the tools of hardware rewiring, of control theory and cybernetics, of behavior science, of psychoanalysis and love and

friendship?

Like what are the interaction protocols that you bring, right?

And then in first person, it’s this notion of having an inner perspective and being a system that has valence and cares about the outcome of things,

makes decisions and has memories and tells a story about itself and the outside world.

And how can all of that exist and still be consistent with the laws of physics and chemistry and various other things that we see around us?

So that I find to be maybe the most interesting and the most important mystery for all of us on the science and also on the personal level.

So that’s what I’m interested in.

So your work is focused on starting at the physics, going all the way to friendship and love and psychoanalysis.

Yeah, although actually I would turn that upside down.

I think that pyramid is backwards.

And I think it’s behavior science at the bottom.

I think it’s behavior science all the way.

I think in certain ways, even math is the behavior of a certain kind of being that lives in a latent space.

And physics is what we call systems that at least look to be amenable to a very simple, low agency kind of model and so on.

But that’s what I’m interested in is understanding that and developing applications, because it’s very important to me that what we do is transition

deep ideas and philosophy into actual practical applications that not only make it clear whether we’re making any progress or not, but also allow us

to relieve suffering and make life better for all sentient beings and enable us and others to reach their full potential.

So these are very practical things, I think.

Behavioral science, I suppose, is more subjective and mathematics and physics is more objective.

Would that be the clear difference?

The idea basically is that where something is on that spectrum, and I’ve called it the spectrum of persuadability, you could call it the spectrum of

intelligence or agency or something like that.

I like the notion of the spectrum of persuadability because it means that these are not things you can decide or have feelings about from a

philosophical armchair.

You have to make a hypothesis about which tools, which interaction protocols you’re going to bring to a given system.

And then we all get to find out how that worked out for you, right?

So you could be wrong in many ways.

In both directions, you can guess too high or too low or wrong in various ways.

And then we can all find out how that’s working out.

And so I do think that the behavior of certain objects is well described by specific formal rules, and we call those things the subject of

mathematics.

And then there are some other things whose behavior really requires the kinds of tools that we use in behavioral cognitive neuroscience.

And those are other kinds of minds that we think we study in biology or in psychology or other sciences.

Why are you using the term persuadability?

Who are you persuading and of what in this context?

Yeah.

The beginning of my work is very much in regenerative medicine, in bioengineering, things like that.

So for those kinds of systems, the question is always, how do you get the system to do what you want it to do?

So there are cells, there are molecular networks, there are materials, there are organs and tissues and synthetic beings and biobots and whatever.

And so the idea is, if I want your cells to regrow a limb, for example, if you’re injured and I want your cells to regrow a limb, I have many options.

Some of those options are, I’m going to micromanage all of the molecular events that have to happen, right?

And there’s an incredible number of those.

Or maybe I just have to micromanage the cells and the stem cell kinds of signaling factors.

Or maybe actually I can give the cells a very high level prompt that says you really should build a limb and convince them to do it, right?

And so where, which of those is possible?

I mean, clearly people have a lot of intuitions about that.

If you ask standard people in regenerative medicine and molecular biology, they’re going to say, well, that convincing thing is crazy.

What we really should be doing is talking to the cells or better yet, the molecular networks.

And in fact, all the excitement of the biological sciences today is at, you know, single molecule approaches and big data and genomics and all of

that.

The assumption is that going down in scale is where the action is going to be.

And I think that’s wrong.

But the thing that we can say for sure is that you can’t guess that.

You have to do experiments and you have to see because you don’t know where any given system is on that spectrum of persuadability.

And it turns out that every time we look and we take tools from behavioral science, so learning, different kinds of training, different kinds of

models that are used in active inference and surprise minimization and perceptual multistability and visual illusions and all these kinds of

interesting things, you know, stress perception and memory, active memory reconstruction, all these interesting things.

When we apply them outside the brain to other kinds of living systems, we find novel discoveries and novel capabilities actually being able to get

the material to do new things that nobody had ever found before.

And precisely because I think that people didn’t look at it from those perspectives, they assumed that it was a low level kind of thing.

So when I say persuadability, I mean different types of approaches, right?

And we all know if you want to persuade your wind-up clock to do something, you’re not going to argue with it or make it feel guilty or anything.

You’re going to have to get in there with a wrench and you’re going to have to, you know, tune it up and do whatever.

If you want to do that same thing to a cell or a thermostat or an animal or a human, you’re going to be using other sets of tools that we’ve given

other names to.

And so that’s, now of course that spectrum, the important thing is that as you get to the right of that spectrum, as the agency of the system goes up

, it is no longer just about persuading it to do things.

It’s a bidirectional relationship, what Richard Watson would call a mutual vulnerable knowing.

So the idea is that on the right side of that spectrum, when systems reach the higher levels of agency, the idea is that you’re willing to let that

system persuade you of things as well.

You know, in molecular biology, you do things, hopefully the system does what you want to do, but you haven’t changed.

You’re still exactly the way you came in.

But on the right side of that spectrum, if you’re having interactions with even cells, but certainly, you know, dogs, other animals, maybe other

creatures soon, you’re not the same at the end of that interaction as you were going in.

It’s a mutual bidirectional relationship.

So it’s not just you persuading something else.

It’s not you pushing things.

It’s a mutual bidirectional set of persuasions, whether those are purely intellectual or of other kinds.

So in order to be effective at persuading an intelligent being, you yourself have to be persuadable.

So the closer in intelligence you are to the thing you’re trying to persuade, the more persuadable you have to become.

Hence the mutual vulnerable knowing.

What a term.

Yeah, you should talk to Richard as well.

He’s an amazing guy and he’s got some very interesting ideas about the intersection of cognition and evolution.

But, you know, I think what you bring up is very important because there has to be a kind of impedance match between what you’re looking for and the

tools that you’re using.

I think the reason physics always sees mechanism and not minds is that physics uses low agency tools.

You’ve got volt meters and rulers and things like this.

And if you use those tools as your interface, all you’re ever going to see is mechanisms and those kinds of things.

If you want to see minds, you have to use a mind, right?

You have to have, there has to be some degree of resonance between your interface and the thing you’re hoping to find.

You’ve said this about physics before.

Can you just linger on that, like expand on it?

What do you mean?

Why physics is not enough to understand life, to understand mind, to understand intelligence?

You make a lot of controversial statements with your work.

That’s one of them, because there are a lot of physicists that believe they can understand life, the emergence of life, the origin of life, the

origin of intelligence using the tools of physics.

In fact, all the other tools are a distraction to those folks.

To them, if you want to understand fundamentally anything, you have to start at physics.

And you’re saying, no, physics is not enough.

Here’s the issue.

Everything here hangs on what it means to understand.

Okay.

For me, understand doesn’t just mean having some sort of pleasing model that seems to capture some important aspect of what’s going on.

It also means that you have to be generative and creative in terms of capabilities.

And so, for me, that means if I tell you this is what I think about cognition in cells and tissues, it means, for example, that I think we’re going

to be able to take those ideas and use them to produce new regenerative medicine that actually helps people in various ways, right?

It’s just an example.

So, if you think as a physicist you’re going to have a complete understanding of what’s going on from that perspective of fields and particles and

who knows what else is at the bottom there, does that mean then that when somebody is missing a finger or has a psychological problem or has these

other high-level issues that you have something for them, that you’re going to be able to do something?

Because my claim is that you’re not going to.

And even if you have some theory of physics that is completely compatible with everything that’s going on, it’s not enough.

That’s not specific enough to enable you to solve the problems you need to solve.

In the end, when you need to solve those problems, the person you’re going to go to is not a physicist.

It’s going to be either a biologist or a psychiatrist or who knows, but it’s not going to be a physicist.

And the simple example is this.

You know, let’s say someone comes in here and tells you a beautiful mathematical proof.

Okay, it’s just really, you know, deep and beautiful.

And there’s a physicist nearby and he says, well, I know exactly what happened.

There were some air particles that moved from that guy’s mouth to your ear.

I see what goes on.

It moved the cilia in your ear and the electrical signals went up to your brain.

I mean, we have a complete accounting of what happened, done and done.

But if you want to understand what’s the more important aspect of that interaction, it’s not going to be found in the physics department.

It’s going to be found in the math department.

So that’s my only claim is that physics is an amazing lens with which to view the world, but you’re capturing certain things.

And if you want to stretch to sort of encompass these other things, we just don’t call that physics anymore.

We call that something else.

Okay, but you’re kind of speaking about the super complex organisms.

Can we go to the simplest possible thing where you first take a step over the line, the Cartesian cut, as you’ve called it, from the non-mind to mind

, from the non-living to living?

The simplest possible thing, isn’t that in the realm of physics to understand?

How do we understand that first step where you’re like, that thing has no mind, probably non-living, and here’s a living thing that has a mind, that

line.

I think that’s a really interesting line.

Maybe you can speak to the line as well.

And can physics help us understand it?

Yeah, let’s talk about it.

Well, first of all, of course it can.

I mean, it can help, meaning that I’m not saying physics is not helpful.

Of course it’s helpful.

It’s a very important lens on one slice of what’s going on in any of these systems.

But I think the most important thing I can say about that question is I don’t believe in any such line.

I don’t believe any of that exists.

I think there is a, I think it’s a continuum.

I think we as humans like to demarcate areas on that continuum and give them names because it makes life easier.

And then we have a lot of battles over, you know, so-called category errors when people transgress those categories.

I think most of those categories at this point, they may have done some good service at the beginning of when the scientific method was getting

started and so on.

I think at this point, they mostly hold back science.

Many, many categories that we can talk about are at this point very harmful to progress because what those categories do is they prevent you from

porting tools.

If you think that living things are fundamentally different from non-living things, or if you think that cognitive things are these like advanced

brainy things that are very different from other kinds of systems, what you’re not going to do is take the tools that are appropriate to these kinds of

cognitive systems, right?

So the tools that have been developed in behavioral science and so on, you’re never going to try them in other contexts because you’ve already

decided that there’s a categorical difference, that it would be a categorical error to apply them.

And people say this to me all the time, that you’re making a category error.

As if these categories were given to us, you know, from on high and we have to obey them forevermore. The categories should change with the science.

So, yeah, I don’t believe in any such line.

And I think a physics story is very often a useful part of the story, but for most interesting things, it’s not the entire story.

Okay, so if there’s no line, is it still useful to talk about things like the origin of life?

That’s one of the big open mysteries before us as a human civilization, as scientifically minded, curious Homo sapiens.

How did this whole thing start?

Are you saying there is no start?

Is there a point where you could say that invention right there was the start of it all on earth?

My suggestion, in my experience, is that it’s much better not to try to define any kind of a line.

Okay, because inevitably I’ve never found, and people try to, you know, we play this game all the time.

When I make my continuum claim, then people try to come, okay, well, what about this?

You know, what about this?

And I haven’t found one yet that really shoots that down, that you can’t zoom in and say, yeah, okay, but right before then this happened.

And then if we really look close, like here’s a bunch of steps in between, right?

Pretty much everything ends up being a continuum.

But here’s what I think is much more interesting than trying to make that line.

I think what’s really more useful is trying to understand the transformation process.

What is it that happened to scale up?

And I’ll give you a really dumb example.

And we always get into this because people often really, really don’t like this continuum view.

The word adult, right?

Everybody is going to say, look, I know what a baby is.

I know what an adult is.

You’re crazy to say that there’s no difference.

I’m not saying there’s no difference.

What I’m saying is the word adult is really helpful in court because you just need to move things along.

And so we’ve decided that if you’re 18, you’re an adult.

However, what it hides, what it completely conceals, is the fact that, first of all, nothing special happens on your 18th birthday, right?

Second, if you actually look at the data, the car rental companies actually have a much better estimate because they actually look at the accident

statistics and they’ll say it’s about 25 is really what you’re looking for, right?

So theirs is a little better.

It’s less arbitrary.

But in either case, what it’s hiding is the fact that we do not have a good story of what happened from the time that you were an egg to the time

that you’re the supposed adult.

And what is the scaling of personal responsibility, decision making, judgment?

Like these are deep, fundamental, you know, questions.

Nobody wants to get into that every time somebody, you know, has a traffic ticket.

And so, okay, so we’ve just decided that this is this adult idea.

And of course, it does come up in court because then somebody has a brain tumor or somebody’s eating too many Twinkies or something has happened.

You say, look, that wasn’t me.

Whoever did that, I was on drugs.

Well, why’d you take the drugs?

Well, that was, you know, yesterday me, not today me.

So we get into these very deep questions that are completely glossed over by this idea of an adult.

So I think once you start scratching the surface, most of these categories are like that.

They’re convenient and they’re good.

You know, I get into this with neurons all the time.

I’ll ask people, what’s a neuron?

Like what’s really a neuron?

And yes, if you’re in neurobiology 101, of course, you just say, look, these are what neurons look like.

Let’s just study the neuroanatomy and we’re done.

But if you really want to understand what’s going on, well, neurons develop from other types of cells.

And that was a slow and gradual process.

And most of the cells in your body do the things that neurons do.

So what really is a neuron, right?

So once you start scratching this, this happens.

And I have some things that I think are coming out of our lab and others that are, I think, very interesting about the origin of life.

But I don’t think it’s about finding that one boom, like, this is it.

Yeah, there are innovations, right?

There are innovations that allow you to scale in an amazing way, for sure.

And there are lots of people that study those, right?

So things that thermodynamic kind of metabolic things and all kinds of architectures and so on.

But I don’t think it’s about finding a line.

I think it’s about finding a scaling process.

Scaling process, but then there is more rapid scaling and there’s slower scaling.

So innovation, invention, I think is useful to understand so you can predict how likely it is on other planets, for example.

Or to be able to describe the likelihood of these kinds of phenomena happening in certain kinds of environments.

Again, specifically, in answering how many alien civilizations there are.

That’s why it’s useful.

But it’s also useful on a scientific level to have categories.

Not just because it makes us feel good and fuzzy inside.

But because it makes conversation possible and productive, I think.

If everything is a spectrum, it becomes difficult to make concrete statements, I think.

Like we even use the terms of biology and physics.

Those are categories.

Technically, it’s all the same thing, really.

Fundamentally, it’s all the same.

There’s no difference between biology and physics.

But it’s a useful category.

If you go to the physics department and the biology department, those people are different in some kind of categorical way.

So somehow, I don’t know what the chicken or the egg is, but the categories.

Maybe the categories create themselves because of the way we think about them and use them in language.

But it does seem useful.

Let me make the opposite argument.

They’re absolutely useful.

They’re useful specifically when you want to gloss over certain things.

The categories are exactly useful when there’s a whole bunch of stuff.

And this is what’s important about science: the art of being able to say something without first having to say everything, right?

Which would make it impossible.

So categories are great when you want to say, look, I know there’s a bunch of stuff hidden here.

I’m going to ignore all that.

And we’re just going to like, let’s get on with this particular thing.

And all of that is great as long as you don’t lose track of the stuff that you glossed over.

And that’s what I’m afraid is happening in a lot of different ways.

And in terms of, look, I’m very interested in life beyond Earth and all of these kinds of things.

Although we should also talk about what I call SUTI, the search for unconventional terrestrial intelligences.

I think we got much bigger issues than actually recognizing aliens off Earth.

But I’ll make this claim.

I think the categorical stuff is actually hurting that search.

Because if we try to define categories with the kinds of criteria that we’ve gotten used to,

we are going to be very poorly set up to recognize life in novel embodiments.

I think we have a kind of mind blindness.

I think this is really key.

To me, the cognitive spectrum is much more interesting than the spectrum of life.

I think really what we’re talking about is a spectrum of cognition.

And it’s weird as a biologist to say, I don’t think life is all that interesting a category.

I think the categories of different types of minds, I think, is extremely interesting.

And to the extent that we think our categories are complete and are cutting nature at its joints,

we are going to be very poorly placed to recognize novel systems.

So, for example, a lot of people will say, well, this is intelligent and this isn’t, right?

And there’s a binary thing.

And that’s useful.

And occasionally that’s useful for some things.

I would like to say, instead of that, let’s admit that we have a spectrum.

But instead of just saying, oh, look, everything’s intelligent, right?

Because if you do that, you’re right.

You can’t do anything after that.

What I’d like to say instead is, no, no, you have to be very specific as to what kind and how much.

In other words, what problem space is it operating in?

What kind of mind does it have?

What kind of cognitive capacities does it have?

You have to actually be much more specific.

And we can even name, right?

That’s fine.

We can name different types of, I mean, this is doing predictive processing.

This can’t do that, but it can form memories.

What kind?

Well, habituation and sensitization, but not associative conditioning.

Like, it’s fine to have categories for specific capabilities, but it actually, I think it actually makes for much more rigorous

discussions because it makes you say, what is it that you’re claiming this thing does?

And it works in both directions.

And so some people will say, well, that’s a, that’s a cell.

That can’t be intelligent.

And I’ll say, well, let’s be very specific.

Here are some claims about, here’s some problem solving that it’s doing.

Tell me why that doesn’t, you know, why doesn’t that match? Or in the opposite direction, somebody comes to me and says, you’re right, you’re right.

You know, the whole solar system, man, it’s just like this amazing, like, okay, what is it doing?

Like, tell me what tools of cognitive and behavioral science are you using to reach that conclusion, right?

And so I think it’s actually much more productive to take this operational stance and say, tell me what protocols you think you can

deploy with this thing that would lead you to use these terms.

To have a bit of a meta conversation about the conversation, I should say that part of the persuadability argument that we two intelligent creatures

are doing is me playing devil’s advocate every once in a while.

And you did the same, which is kind of interesting, taking the opposite view and see what comes out.

Because you don’t know the result of the argument until you have the argument.

And it seems productive to just take the other side of the argument.

For sure.

It’s a very important thinking aid to, first of all, you know, what they call steel manning, right?

To try to make the strongest possible case for the other side and to ask yourself, okay, what are all the places that I’m

sort of glossing over because I don’t know exactly what to say?

And where are all the holes in the argument, and what would, you know, a really good critique really look like?

Yeah.

Sorry to go back there just to linger on the term because it’s so interesting, persuadability.

Did I understand correctly that you mean that it’s kind of synonymous with intelligence?

So it’s an engineering-centric view of an intelligent system, because if it’s persuadable, you’re more focused on, how can I steer the goals of the

system?

The behaviors of the system, meaning an intelligent system maybe is a goal-oriented, goal-driven system with agency.

And when you call it persuadable, you’re thinking more like, okay, here’s an intelligence system that I’m interacting with that I would like to get

it to accomplish certain things.

But fundamentally they’re synonymous or correlated, persuadability and intelligence.

They’re definitely correlated.

So let me, I want to preface this with one thing.

When I say it’s an engineering perspective, I don’t mean that the standard tools that we use in engineering and this idea of enforced control and

steering is how we should view all of the world.

I’m not saying that at all.

And I want to be very clear on that, because people do email me and say, ah, this engineering thing, you’re going to drain

the, you know, the life and the majesty out of these high-end, like, human conversations.

My whole point is not that at all.

It’s that, of course, at the right side of the spectrum, it doesn’t look like engineering anymore.

Right.

It looks like friendship and love and psychoanalysis and all these other tools that we have.

But here’s what I want to do.

I want to be very specific to my colleagues in regenerative medicine, and just imagine if I, you know, went to a bioengineering department

or a genetics department and I started talking about high-level, you know, cognition and psychoanalysis, right?

They wouldn’t want to hear that.

So I focus on the engineering approach because I want to say, look, this is not a philosophical problem.

This is not a linguistics problem.

We are not trying to define terms in different ways to make anybody feel fuzzy.

What I’m telling you is if you want to reach certain capabilities, if you want to reprogram cancer, if you want to regrow new organs, you want to

defeat aging, you want to do these specific things, you are leaving too much on the table by making an unwarranted assumption that the low-level

tools that we have, so these are the rules of chemistry and the kind of molecular rewiring, that those are going to be sufficient to get to where you want to go.

It’s an assumption only, and it’s an unwarranted assumption.

And actually we’ve done experiments now.

So it’s not philosophy, but real experiments, showing that if you take these other tools, you can in fact persuade the system in ways that have never been done

before.

And we can unpack all of that, but it is absolutely correlated with intelligence.

So let me flesh that out a little bit.

What I think is scaling in all of these things, right?

Cause I keep talking about the scaling.

So what is it that’s scaling?

What I think is scaling is something I call the cognitive light cone.

And the cognitive light cone is the size of the biggest goal state that you can pursue.

This doesn’t mean how far do your senses reach?

This doesn’t mean how far can you affect it?

So the James Webb telescope has enormous sensory reach, but that doesn’t mean that’s, that’s the size of its cognitive light cone.

The size of the cognitive light cone is the scale of the biggest goal you can actively pursue.

But I do think it’s a useful concept to enable us to think about very different types of agents of different composition, different provenance, you

know, engineered, evolved, hybrid, whatever, all in the same framework.

And by the way, the reason I use light cone is that it has this idea from physics that you’re putting space and time kind of in the same diagram,

which I like here.

So if you tell me that all your goals revolve around maximizing the amount of sugar in this, you know, 10, 20 micron

radius of space time, and that you have, you know, 20 minutes memory going back and maybe five minutes predictive capacity going forward, that tiny

little cognitive light cone, I’m going to say probably a bacterium.

And if you say to me that, well, I care, I’m able to care about several hundred yards, the sort of scale, I could never care about what happens three

weeks from now, two towns over, just impossible.

I’m saying you might be a dog.

And if, and if you say to me, okay, I care about really what happens, you know, the financial markets on earth, you know, long after I’m dead and

this and that, so you’re probably a human.

And if you say to me, I care in the linear range, I’m not just saying it, I can actively care in the linear range about all the

living beings on this planet.

I’m going to say, well, you’re not a standard human.

You must be something else because humans, I don’t know, these standard humans today, I don’t think can do that.

You must be some kind of a bodhisattva or some other thing that has these massive cognitive light cones.

So I think what’s scaling from zero, and I do think it goes all the way down.

I think we can talk about even, even particles doing something like this.

I think what scales is the size of the cognitive light cone.
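
To make the comparison concrete, here is a minimal sketch in Python of the cognitive light cone as a data structure. None of this comes from Levin’s papers; the field names are invented for illustration, and the magnitudes are rough, assumed readings of the examples above.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Illustrative sketch: the scale of the biggest goal state an agent
    can actively pursue, in space and time (not its sensory reach)."""
    space_radius_m: float       # spatial extent of its largest goal
    memory_horizon_s: float     # how far back its states inform that goal
    planning_horizon_s: float   # how far forward it can pursue outcomes

# Rough magnitudes from the examples in the conversation (assumed values):
bacterium = CognitiveLightCone(20e-6, 20 * 60, 5 * 60)  # microns, minutes
dog = CognitiveLightCone(300.0, 86400.0, 86400.0)       # hundreds of yards, about a day
human = CognitiveLightCone(1.3e7, 3e9, 3e9)             # planet-scale, decades
```

The point of the comparison is only that what scales across these agents is the size of the cone, not the material they are made of.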

And so now this is an interesting, here, I’ll try for a definition of life or whatever, for whatever it’s worth.

I spent no time trying to make that stick, but if we wanted to, I think we call things alive to the extent that the cognitive light cone of that

thing is bigger than that of its parts.

So in other words, a rock isn’t very exciting because the things it knows how to do are the things that its parts already know how to do, which is

follow gradients and things like that.

But living things are amazing at aligning their, their competent parts so that the collective has a larger cognitive light cone than the parts.

I’ll give you a very simple example that comes up in biology and that comes up in our cancer program all the time.

Individual cells have little tiny cognitive light cones.

They, what are their goals?

Well, they’re trying to manage pH, metabolic state, some other things.

There are some goals in transcriptional space, some goals in metabolic space, some goals in physiological state space, but, but they, they’re

generally very tiny goals.

One thing evolution did was to provide a kind of cognitive glue, which we can also talk about, that ties them together into a multicellular system.

And those systems have grandiose goals.

They’re making limbs, and if you take a salamander limb and you chop it off, they will regrow that limb with the right number of fingers.

Then they’ll stop when it’s done.

The goal has been achieved.

No individual cell knows what a finger is or how many fingers you’re supposed to have, but the collective absolutely does.

And that process of growing that cognitive light cone from a single cell to something much bigger.

And of course the failure mode of that process.

So cancer, right?

When cells disconnect, they physiologically disconnect from the other cells.

Their cognitive light cone shrinks.

The boundary between self and world, which is what the cognitive light cone defines, uh, shrinks.

Now they’re back to an amoeba.

As far as they’re concerned, the rest of the body is just an external environment and they do what amoebas do.

They go where life is good.

They reproduce as much as they can, right?

So that cognitive light cone, that is the thing that I’m talking about that scales.

And so when we’re looking for life, I don’t think we’re looking for specific materials.

I don’t think we’re looking for specific metabolic states.

I think we’re looking for scales of cognitive light cone.

We’re looking for alignment of parts towards bigger goals in spaces that the parts could not comprehend.

And so cognitive light cone, just to, uh, make clear is about goals that you can actively pursue now.

You said linear, like within reach immediately.

No, I didn’t.

Sorry.

I didn’t mean that.

First of all, the goal necessarily is, is often removed in time.

So in other words, when you’re pursuing a goal, it means that you have a separation between current state and target state.

At minimum, your thermostat, right?

Let’s just think about that.

There, there’s a separation in time because the thing you’re trying to make happen so that the temperature goes to a certain level is not true right

now.

And all your actions are going to be around reducing that error, right?

That basic homeostatic loop is all about closing that gap.
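
As a concrete illustration of that loop, here is a minimal Python sketch of a toy thermostat; the function name and gain value are invented for the example, not taken from the conversation.

```python
def thermostat_step(current_temp: float, set_point: float, gain: float = 0.1) -> float:
    """One tick of a basic homeostatic loop: measure the error between
    current state and target state, then act to reduce it."""
    error = set_point - current_temp     # the gap that defines the goal
    return current_temp + gain * error   # each action shrinks the gap

temp = 15.0
for _ in range(50):
    temp = thermostat_step(temp, set_point=21.0)
# temp is now ~21.0: the target state, reached purely by error reduction
```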

When I said linear range, this is what I meant.

Uh, if I say to you this, this terrible thing happened to, uh, you know, 10 people.

And, you know, you have some degree of activation about it.

And then they say, no, no, no, actually it was, you know, 10,000 people.

You’re not a thousand times more activated about it.

You’re somewhat more activated, but it’s not a thousand.

And if I say, oh my God, it was actually 10 million people.

You’re not a million times more activated.

You don’t have that capacity in the linear range.

You sort of, if you think about that curve, we reach a saturation point.
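
One simple way to picture that saturation, as an illustrative assumption rather than a model Levin states, is a logarithmic response: multiplying the number of people by 1,000 only adds a constant amount of concern.

```python
import math

def activation(n_people: int) -> float:
    """Toy saturating response: grows with the count, but far more slowly
    than linearly, so 1,000x the people is nowhere near 1,000x the concern."""
    return math.log10(n_people)

print(activation(10))          # 1.0
print(activation(10_000))      # 4.0: 1,000x the people, only 4x the response
print(activation(10_000_000))  # 7.0
```

Caring "in the linear range," in these terms, would mean a response proportional to the count itself.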

I have some amazing colleagues in the Buddhist community with whom we’ve written some papers

about this.

The radius of compassion is like, can you grow your cognitive system to the point that, yeah,

it really isn’t just your family group.

It really isn’t just a hundred people, you know, in your, in your, you know, circle.

Can you grow your cognitive, um, light cone to the point where, no, no, we care about the

whole, whether it’s all of humanity or the whole ecosystem or the whole, whatever.

Can you actually care about that?

The exact same way that we now care about a much smaller, um, set of people.

That’s what I mean by linear range.

But you said separated by time, like a thermostat, but a bacteria, I mean, if you zoom out far

enough, a bacteria could be formulated to have a goal state of creating human civilization.

Because if you look at the, you know, bacteria has a role to play in the whole history of

earth.

And so if you anthropomorphize the goals of a bacteria enough, I mean, it has a concrete

role to play in the history of the evolution of human civilization.

So you do need to, when you define a cognitive light cone, look directly at short-term behavior.

Well, no, how do you know what the cognitive light cone of something is?

Because, as you’ve said, it could be almost anything.

The key is you have to do experiments.

And the way you do experiments is, you have to do interventional experiments.

You have to put barriers between it and its goal, and you have to ask what happens.

And intelligence is the degree of ingenuity that it has in overcoming barriers between it

and its goal.
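
Here is a minimal sketch of what such an interventional experiment could look like, assuming a toy one-dimensional world; the two policies and the scoring are hypothetical stand-ins, not a protocol from Levin’s lab.

```python
def run_trial(policy, barrier: bool) -> bool:
    """Agent starts at 0 and seeks the goal at position 10; an optional
    barrier blocks position 5. Returns True if the goal is reached."""
    pos = 0
    for _ in range(100):
        step = policy(pos, barrier)
        if barrier and pos + step == 5:  # a move into the blocked cell is refused
            continue
        pos += step
        if pos >= 10:
            return True
    return False

def rigid(pos, barrier):
    return 1    # always the same move, so any barrier stops it for good

def flexible(pos, barrier):
    return 2 if barrier and pos == 4 else 1   # steps over the blocked cell

# Intelligence, on this toy reading, is ingenuity in overcoming barriers:
assert run_trial(rigid, barrier=False)       # fine when the path is clear
assert not run_trial(rigid, barrier=True)    # low ingenuity: never adapts
assert run_trial(flexible, barrier=True)     # gets around the intervention
```

The rigid agent reaches the goal only when the path is clear; the flexible one finds a way around the intervention, which is the kind of ingenuity the experiment is meant to expose.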

Now, this is, I think, a totally doable but impractical and

very expensive experiment.

But you could imagine setting up a scenario where the bacteria were blocked from becoming

more complex.

And you can ask if they would try to find ways around it, or whether it’s actually, nah, their

goals are actually metabolic.

And as long as those goals are met, they’re not going to actually get around your barrier.

This business of putting barriers between things and their

goals is actually extremely powerful, because, and I’m

sure we’ll get to this later, but we’ve deployed it in all kinds of weird

systems that you wouldn’t think are goal-driven systems.

And what it allows us to do is to get beyond just what you’d call anthropomorphizing

claims, say, you know, saying, oh yeah, I think this thing is

trying to do this or that.

The question is, well, let’s do the experiment.

And one other thing I want to say about anthropomorphizing is people, people say this to me all

the time.

Um, I don’t think that exists.

I think that’s kind of like, you know, and I’ll tell you why.

I think it’s like heresy or, uh, other terms that aren’t really a thing,

because if you unpack it, here’s what anthropomorphism means.

Humans have a certain magic and you’re making a category error by attributing that magic

somewhere else.

My point is we have the same magic that everything has.

We have a couple of interesting things besides the cognitive light cone and some other stuff.

And it isn’t that you have to keep the humans separate because there’s some bright line.

It’s just that same old thing: all I’m arguing for is the scientific

method.

Really.

That’s really all this is.

All I’m saying is you can’t just make pronouncements such as the humans are this and let’s not

sort of push that.

You have to do experiments.

After you’ve done your experiments, you can say either, I’ve done it and I found, look at that, that thing actually can predict

the future for the next, you know, 12 minutes.

It’s amazing.

Or you say, you know what?

I’ve tried all the things in the behaviorist handbook.

They just don’t help me with this.

It’s a very low level of, like, that’s it.

It’s a very low level of intelligence.

Fine.

Right.

Done.

So that’s really all I’m arguing for is an empirical approach.

And then things like anthropomorphism go away.

It’s just the matter of, have you done the experiment and what did you find?

And that’s actually one of the things you’re saying that, uh, if you remove the categorization

of things, you can use the tools of one discipline on everything.

You can try.

To try and then see.

That’s the underpinning of the criticism of anthropomorphization, because, uh, what is

that?

It’s that something like psychoanalysis of another human could technically be applied to robots, to AI

systems, to more primitive biological systems, and so on.

Try.

Yeah.

We’ve used everything from basic habituation conditioning all the way through anxiolytics,

hallucinogens, all kinds of cognitive modification on the range of things that you wouldn’t believe.

And by the way, I’m not the first person to come up with this.

So there was a guy named Bose well over a hundred years ago who was studying how anesthesia

affected animals and animal cells and drawing specific curves around electrical excitability.

And he then went and did it with plants and saw some very similar phenomena, and being the genius

that he was, he then said, well, I don’t know when to stop.

You know, everybody thinks we should have stopped long before plants, because people

made fun of him for that.

And he’s like, yeah, but the science doesn’t tell us where to stop.

The tool is working.

Let’s keep going.

And he showed interesting phenomena on materials, metals, and other kinds of materials.

Right.

And so, uh, the interesting thing is that, yeah, there is no, you know,

generic rule that tells you when you need to stop.

We make those up.

Those are completely made up.

You have to just, uh, you have to do the science and find out.

Yeah.

You, uh, we’ll probably get to it.

Uh, you’ve been doing recent work on looking at computational systems, even trivial ones

like algorithms and sorting algorithms, and analyzing them in a behavioral kind of way.

See if there’s minds inside those sorting algorithms.

And of course, let me make a pothead statement question here, that you can start to do things

like, uh, trying to do psychedelics with a sorting algorithm.

Yeah.

And what does that even look like?

Yeah.

It looks like a ridiculous question.

It’ll get you fired from most academic departments, but it may be, if you take it seriously, you

could try and see if it applies.

Yeah.

If a thing could be shown to have some kind of cognitive complexity, some kind

of mind, why not apply to it the same kind of analysis and the same kind of tools like

psychedelics that you would to a human mind?

A complex human mind, at least. It might be a productive question to ask, because

you’ve seen like spiders on psychedelics, like more primitive biological organisms on

psychedelics.

Why not try to see what an algorithm does on psychedelics?

Well, yeah, because you see, the thing to remember is we don’t have a magic sense or really

good intuition for what the mapping is between the embodiment of something and the degree of

intelligence it has.

We think we do because we have an N of one example on earth and we kind of know what to

expect from cells, snakes, you know, primates.

But we really don’t.

We don’t have, and this is, we’ll get into more of the stuff on the platonic space, but

our intuitions around that stuff are so bad that to really think that we know enough not to

try things at this point is, I think, really short-sighted.

Before we talk about the platonic space, let’s lay out some foundations.

I think one useful one comes from the paper, Technological Approach to Mind Everywhere, an

experimentally grounded framework for understanding diverse bodies and minds.

Could you tell me about this framework and maybe can you tell me about figure one from this

paper that has a few components?

One is the tiers of biological cognition that goes from group to whole organism to whole tissue

organ, down to neural network, down to cytoskeleton, down to genetic network.

And then there’s layers of biological systems from ecosystem, down to swarm, down to organism,

tissue, and then finally cell.

So can you explain this figure, and can you explain the so-called TAME framework?

So this is version 1.0, and there’s a kind of update, a 2.0, that I’m writing at the

moment, trying to formalize in a careful way all the things that we’ve been talking about

here and in particular, this notion of having to do experiments to figure out where any given

system is on a continuum.

And we can, let’s just start with figure two maybe for a second and then we’ll come back

to figure one.

And first just to unpack the acronym, I like the idea that it spells out TAME because the

central focus of this is interactions and how do you interact with a system to have a

productive interaction with it.

And the idea is that cognitive claims are really protocol claims.

When you tell me that something has some degree of intelligence, what you’re really saying

is this is the set of tools I’m going to deploy and we can all find out how that worked out

for you.

And so technological, because I wanted to be clear with my colleagues that this was not

a project in just philosophy.

This had very specific empirical implications that are going to play out in engineering and

regenerative medicine and so on.

Technological approach to mind everywhere, this idea that we don’t know yet where different

kinds of minds are to be found and we have to empirically figure that out.

And so what you see here in figure two is basically this idea that there is a spectrum and I’m

just showing four waypoints along that spectrum.

And as you move to the right of that spectrum, a couple of things happen.

Persuadability goes up, meaning that the systems become more reprogrammable, more plastic,

more able to do different things than whatever they’re standardly doing.

So you have more ability to get them to do new and interesting things.

The effort needed to exert influence goes down.

That is, autonomy goes up.

And to the extent that you are good at convincing or motivating the system to do things, you don’t

have to sweat the details as much, right?

And this also has to do with what I call engineering agential materials.

So when you engineer wood, metal, plastic, things like that, you are responsible for absolutely

everything because the material is not going to do anything other than hopefully hold its

shape.

If you’re engineering active matter or you’re engineering computational materials or better

yet, agential materials like living matter, you can do some very high level prompting and

let the system then do very complicated things that you don’t need to micromanage.

And we all know that that increases when you’re starting to work with intelligent systems like

animals and humans and so on.

And the other thing that goes down as you move to the right is the amount of mechanism or physics knowledge that you need in order to exert the influence.

So if you know how your thermostat's set point is to be set, you really don't

need to know much of anything else, right?

You just need to know that it is a homeostatic system and that this is how I change the set

point.

You don’t need to know how the cooling and heating plant works in order to get it to do

complex things.
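
To make the set-point idea concrete, here is a minimal sketch in Python; the Thermostat class is hypothetical, invented for illustration, not anything from the TAME paper:

```python
# The whole usable interface to a homeostatic system can be one number.
# Everything inside step() is "plant" mechanism the user never needs to know.
class Thermostat:
    def __init__(self, set_point):
        self.set_point = set_point
        self._temp = 15.0                 # current room temperature

    def step(self):
        # Hidden mechanism: push the temperature toward the set point.
        error = self.set_point - self._temp
        self._temp += 0.3 * error

t = Thermostat(set_point=21.0)
for _ in range(30):
    t.step()
print(round(t._temp, 1))   # ~21.0, achieved with zero mechanism knowledge

t.set_point = 18.0         # "persuading" the system is a one-line intervention
```

All the influence is exerted through the set point; nothing about the heating and cooling plant ever appears in the caller's code.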

By the way, quick pause just for people who are listening, let me describe what’s in the

figure.

So there are four different systems going up the scale of persuadability.

So the first system is a mechanical clock.

Then it’s a thermostat.

Then it’s a dog that gets rewards and punishments.

Pavlov’s dog.

And then finally, a bunch of very smart looking humans communicating with each other and arguing,

persuading each other using hashtag reasons.

And then there are arrows below that showing persuadability going up as you move along these systems from the mechanical clock to a bunch of Greeks arguing, while the effort needed to exert influence goes down, and, once again, the mechanism knowledge needed to exert that influence goes down.

Yeah.

I’ll give you an example about that panel C here with the dog.

Isn’t it amazing that humans have been training dogs and horses for thousands of years knowing

zero neuroscience?

Also amazing is that when I’m talking to you right now, I don’t need to worry about manipulating

all of the synaptic proteins in your brain to make you understand what I’m saying and hopefully

remember it.

You’re going to do that all on your own.

I'm giving you very thin prompts, very thin in terms of information content.

And I’m counting on you as a multiscale agential material to take care of the chemistry underneath.

So you don’t need a wrench to convince me.

Correct.

I don't need physics to convince you, and I don't need to know how you work.

Like I don’t need to understand all of the steps.

What I do need to have is trust that you are a multiscale cognitive system that already does that for

yourself.

And you do. This is an amazing thing; people don't think about this enough.

I think, when you wake up in the morning and you have social goals, research goals, financial goals, whatever it is that you have, in order for you to act on those goals, sodium and calcium and other ions have to cross your muscle membranes.

Those incredibly abstract goal states ultimately have to make the chemistry dance in a very particular

way.

Our entire body is a transducer of very abstract things. And by the way, not just our brains; our organs have anatomical goals and other things that we can talk about, because all of this plays out in regeneration and development and so on.

But the scaling of all of these things, the way you regulate yourself: you don't have to sit there and think, wow, I really have to push some sodium ions across this membrane. All of that happens automatically. And that's the incredible benefit of these multi-scale materials.

So what I was trying to do in this paper is a couple of things.

All of these were, by the way, drawn by Jeremy Gay, who’s this amazing graphic artist that works with me.

First of all, in panel A, which is the spiral, what I was trying to point out is that at every level of biological organization, we all know it's sort of nested dolls of organs and tissues and cells and molecules and whatever.

But what I was trying to point out is that this is not just structural.

Every one of those layers is competent and is doing problem solving in different spaces and spaces that are very hard for us to imagine.

We humans, because of our own evolutionary history, are so obsessed with movement in three-dimensional space that even in AI, you see this all the time.

And they say, well, this thing doesn’t have a robotic body.

It’s not embodied.

Yeah.

It’s not embodied by moving around in 3D space, but biology has embodiments in all kinds of spaces that are hard for us to imagine.

Right.

So your cells and tissues are moving in high-dimensional physiological state spaces, in gene expression state spaces, in anatomical state spaces.

They're doing that perception, decision-making, action loop that we do in 3D space, the loop we picture when we think about robots wandering around your kitchen, but they're doing those loops in these other spaces.

And so the first thing I was trying to point out is that, yeah, every layer of your body has its own ability to solve problems in those spaces.

And then, on the right, what I was addressing is this distinction people make: well, there are living beings, and then there are engineered machines. And then they often follow up with all the things machines are never going to be able to do, and whatever. What I was trying to point out here is that it is very difficult to maintain those kinds of distinctions, because life is incredibly interoperable. Life doesn't really care if the thing it's working with was evolved through random trial and error or was engineered with a higher degree of agency. Because at every level, within the cell, within the tissue, within the organism, within the collective, you can replace and substitute engineered systems for the naturally evolved systems.

And that question of, is it really, you know, is it biology or is it technology?

I don’t think it’s a useful question anymore.

So I was trying to warm people up with this idea that what we’re going to do now is talk about minds in general, regardless of their history or their

composition.

It doesn’t matter what you’re made of.

It doesn’t matter how you got here.

Let’s talk about what you’re able to do and what your inner world looks like.

That was the goal of that.

Is it useful to, as a thought experiment, as an experiment of radical empathy, to try to put ourselves in the space of the different minds at each

stage of the spiral?

It's like, what state space is human civilization as a collective embodied in? What does it operate in?

So humans, individual organisms operate in 3D space.

That’s what we understand.

But when there’s a bunch of us together, what are we doing together?

It’s really hard and you have to do experiments, which at larger scales are, you know, really difficult.

But there is such a thing.

There may well be.

We have to do experiments.

I don’t know.

There’s an example.

Somebody will say to me, well, you know, with your kind of panpsychist view, you probably think the weather is agential too.

Well, I can’t say that, but we don’t know.

But have you ever tried to see if a hurricane has habituation or sensitization?

Maybe.

We haven’t done the experiment.

It’s hard, but you could, right?

And maybe weather systems can have certain kinds of memories.

I have no idea.

We have to do experiments.
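
For what it's worth, the habituation/sensitization assay being gestured at is simple to state. A sketch, where `system` and `respond` are hypothetical stand-ins for whatever stimulus and response channel you can define (defining them for a hurricane is, of course, the hard part):

```python
# Habituation: response amplitude declines under repeated identical stimulation.
# Sensitization is the opposite trend. This assumes a nonzero baseline response.
def habituation_score(system, stimulus, trials=20):
    responses = [system.respond(stimulus) for _ in range(trials)]
    early = sum(responses[:5]) / 5
    late = sum(responses[-5:]) / 5
    return (early - late) / early   # > 0: habituating, < 0: sensitizing
```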

So I don’t know what the entire human society is doing, but I’ll just give you a simple example of the kinds of tools.

And we’re actively trying to build tools now to enable radically different agents to communicate.

So we are doing this using AI and other tools to try and get this kind of communication going across very different spaces.

I’ll just give you a very kind of dumb example of how that might be.

Imagine that you’re playing tic-tac-toe against an alien.

So you’re in a room.

You don’t see him.

So you draw the tic-tac-toe thing on the floor, and you know what you’re doing.

You’re trying to make straight lines with X’s and O’s.

And you’re having a nice game.

It’s obvious that he understands the process.

Like, sometimes you win, sometimes you lose.

Like, it’s obvious.

In that one little segment of activity, you guys are sharing a world.

What’s happening in the other room next door?

Well, let’s say the alien doesn’t know anything about geometry.

He doesn’t understand straight lines.

What he’s doing is he’s got a box, and it’s full of basically billiard balls, each one of which has a number on it.

And all he’s doing is he’s looking through the box to find billiard balls whose numbers add up to 15.

He doesn’t understand geometry at all.

All he understands is arithmetic.

You don’t think about arithmetic.

You think geometry.

The reason you guys are playing the same game is that there's this magic square, right, that somebody could construct: a three-by-three square where, if you pick the numbers right, every row, column, and diagonal adds up to 15.

He has no idea that there’s a geometric interpretation to this.

He is solving the problem that he sees, which is totally algebra.

You don’t know anything about that.

But if there is an appropriate interface like this magic square, you guys can share that experience.

You can have an experience.

It doesn’t mean you start to think like him.

It means that you guys are able to interact in a particular way.
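
The interface in this story can be made fully explicit. A small Python sketch, using the classic Lo Shu magic square as the hypothetical shared structure: the eight winning lines of the board are exactly the eight triples from 1 through 9 that sum to 15, so the geometric and arithmetic win tests always agree:

```python
import random
from itertools import combinations

# Lo Shu magic square: every row, column, and diagonal sums to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def wins_geometric(picks):
    """The human's frame: three in a row on the 3x3 board."""
    rows = [set(r) for r in MAGIC]
    cols = [set(c) for c in zip(*MAGIC)]
    diags = [{MAGIC[i][i] for i in range(3)}, {MAGIC[i][2 - i] for i in range(3)}]
    return any(line <= set(picks) for line in rows + cols + diags)

def wins_arithmetic(picks):
    """The alien's frame: some three of the picked numbers sum to 15."""
    return any(sum(c) == 15 for c in combinations(picks, 3))

# The two observers agree on every position, without sharing a worldview:
for _ in range(1000):
    picks = random.sample(range(1, 10), random.randint(3, 5))
    assert wins_geometric(picks) == wins_arithmetic(picks)
print("geometry and arithmetic never disagree")
```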

Okay.

So there’s a mapping between the two different ways of seeing the world that allows you to communicate with each other.

Of seeing a thin slice of the world.

Thin slice of the world.

How do you find that mapping?

So you’re saying we’re trying to figure out ways of finding that mapping for different kinds of systems.

What’s the process for doing that?

So the process is twofold.

One is to get a better understanding of the system: what space is it navigating?

What goals does it have?

What level of ingenuity does it have to reach those goals?

For example, xenobots, right?

We make xenobots.

Or anthrobots.

These are biological systems that have never existed on Earth before.

We have no idea what their cognitive properties are.

We’re learning.

We found some things.

But you can’t predict that from first principles because they’re not at all what their past history would inform you of.

Can you actually explain briefly what a xenobot is and what an anthrobot is?

So one of the things that we’ve been doing is trying to create novel beings that have never been here before.

The reason is that typically when you have a biological system, an animal or a plant, and you say,

hey, why does it have certain forms of behavior, certain forms of anatomy, certain forms of physiology?

Why does it have those?

The answer is always the same.

Well, there's a history of evolutionary selection, a long history of adaptation going back, there are certain environments, and this is what survived, and so that's why it has what it has.

So what I wanted to do was break out of that mold and to basically force us as a community to dig deeper into where these things come from,

and that means taking away the crutch where you just say, well, it’s evolutionary selection.

That’s why it looks like that.

So in order to do that, we have to make artificial synthetic beings.

Now, to be clear, we are starting with living cells, so it’s not that they had no evolutionary history.

The cells do.

They had evolutionary history in frogs or humans or whatever,

but the creatures they make and the capabilities that these creatures have were never directly selected for,

and in fact, they never existed.

So you can’t tell the same kind of story, and what I mean is we can take epithelial cells off of an early frog embryo,

and we don’t change the DNA, no synthetic biology circuits, no material scaffolds, no nanomaterials, no weird drugs, none of that.

What we’re mostly doing is liberating them from the instructive influences of the rest of the cells that they were in in their bodies.

And so when you do that, normally these cells are bullied by their neighboring cells into having a very boring life.

They become a two-dimensional outer covering for the embryo, and they keep out the bacteria, and that’s that.

So you might ask, well, what are these cells capable of when you take them away from that influence?

So when you do that, they form another little life form we call a xenobot,

and it’s this self-motile little thing that has cilia covering its surface.

The cilia are coordinated, so they row against the water, and then the thing starts to move,

and has all kinds of amazing properties.

It has different gene expression, so it has its own novel transcriptome.

It’s able to do things like kinematic self-replication, meaning make copies of itself from loose cells that you put in its environment.

It has the ability to respond to sound, which normal embryos don’t do.

It has these novel capacities, and we did that, and we said,

look, here are some amazing features of this novel system.

Let’s try to understand where they came from.

And some people said, well, maybe it’s a frog-specific thing, you know?

Maybe this is just something unique to frog cells.

And so we said, okay, what's the furthest you can get from frog embryonic cells?

How about human adult cells?

And so we took cells from adult human patients who were donating tracheal epithelia for biopsies and things like that.

And those cells, again: no genetic change, nothing like that.

They self-organized into something we call anthrobots.

Again, self-motile little creature.

Nine thousand differentially expressed genes. So about half the genome is now expressed differently.

And they have interesting abilities.

For example, they can heal human neural wounds.

So in vitro, if you plate some neurons and you put a big scratch through it so you damage them,

anthrobots will settle down on it and, spontaneously, without us having to teach them to do it, they will try to knit the neurons back together across the wound.

What is this video that we’re looking at here?

So this is an anthrobot.

So often when I give talks about this, I show people this video and I say, what do you think this is?

And people will say, well, it looks like some primitive organism you got from the bottom of a pond somewhere.

And I'll say, well, what do you think the genome would look like? And they'll say, well, the genome would look like some primitive creature's.

Right.

If you sequence that thing, you’ll get 100% Homo sapiens.

And that doesn’t look like any stage of normal human development.

It doesn’t act like any stage of human development.

It has the ability to move around.

It has, as I said, over 9,000 differentially expressed genes.

Also, interestingly, it is younger than the cells that it comes from.

So it actually has the ability to roll back its age.

And we can talk about that and what the implications of that are.

But to go back to your original question, what we’re doing with these kinds of systems.

Try and talk to it.

We’re trying to talk to it.

That’s exactly right.

And not just to this; we're trying to talk to molecular networks. We found a couple of years ago that gene regulatory networks, never mind the cells, just the molecular pathways inside of cells, can have several different kinds of learning, including Pavlovian conditioning.

And what we’re doing now is trying to talk to it.
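
As a cartoon of what conditioning in a network even means (a made-up toy, not the actual gene-regulatory-network models from those papers), consider three nodes, where pairing a conditioned stimulus with an unconditioned one strengthens a coupling weight:

```python
# CS = conditioned stimulus (initially ineffective), US = unconditioned stimulus.
def response(cs, us, w):
    return us + w * cs            # response node activity

w, lr = 0.0, 0.2                  # CS->response coupling and learning rate

print("CS alone, before:", response(cs=1, us=0, w=w))   # 0.0: no response

for _ in range(20):               # training: present CS and US together
    r = response(cs=1, us=1, w=w)
    if r > 0:                     # Hebbian-style: co-activity strengthens w
        w += lr * (1.0 - w)       # saturating update toward 1

print("CS alone, after: ", round(response(cs=1, us=0, w=w), 2))  # ~0.99
```

After pairing, the conditioned stimulus alone evokes the response, which is the operational definition of Pavlovian conditioning.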

The biomedical applications are obvious.

Instead of "Hey, Siri," you want, "Hey, liver, why do I feel like crap today?" And you want an answer: well, your potassium levels are this and that, and you don't feel good for these reasons. You should be able to talk to these things, and there should be an interface that allows us to communicate, right?

And I think AI is going to be a huge component of that interface of allowing us to talk to these systems.

It’s a tool to combat our mind blindness, to help us see diverse, other very unconventional minds that are all around us.

Can you generalize that?

Let’s say we meet an alien or an unconventional mind here on Earth.

Think of it as a black box.

You show up.

What’s the procedure for trying to get some hooks into a communication protocol with a thing?

Yeah, that is exactly the mission of my lab.

It is to enable us to develop tools to recognize these things, to learn to communicate with them, to ethically relate to them, and in general, to

expand our ability to do this in the world around us.

I specifically chose these kinds of things because they’re not as alien as proper aliens would be.

So we have some hope.

I mean, we’re made of them.

We have many things in common.

There’s some hope of understanding them.

You’re talking about xenobots and that.

Xenobots and anthrobots and cells and everything else, but they're alien in a couple of important ways.

One is the space they live in is very hard for us to imagine.

What space do they live in?

Well, your body's cells, long before we had a brain that was good for navigating three-dimensional space, were navigating the space of anatomical possibilities. You start as an egg, and you have to become a snake or a giraffe or a human, whatever we're going to be.

And when people model that with cellular-automata types of ideas, this open-loop kind of thing where everything just follows local rules and eventually there's complexity and, here you go, now you've got a giraffe or a human, I'm specifically telling you that that model is totally insufficient to grasp what's actually going on.

What’s actually going on, and there have been many, many experiments on this, is that the system is navigating a space.

It is navigating a space of anatomical possibilities.

If you try to block where it’s going, it will try to get around you.

If you try to challenge it with things it’s never seen before, it will try to come up with a solution.

If you really defeat its ability to do that, which you can, they're not infinitely intelligent, you will either get birth defects or you will get creative problem solving, such as what you're seeing here with xenobots and anthrobots. If you can't be a human, you'll find another way to be in the world. You can be an anthrobot, for example, or you'll be something else.

Just to clarify, what's the difference between cellular-automata-type action, where you're just responding to your local environment and creating some kind of complex behavior, and operating in the space of anatomical possibilities? So there's a kind of goal, I guess; there is some kind of thing, a will to be...

X, something.

The will thing, let's put that aside.

Well, it's fine too. There I go, anthropomorphizing. I just always love to quote Nietzsche.

Yeah, yeah, yeah.

And I'm not saying that's wrong.

I’m just saying I don’t have data for that one, but I’ll tell you the stuff that I’m quite certain of.

There are a couple of different formalisms that we have in control theory.

One of those formalisms is open loop complexity.

In other words, I’ve got a bunch of subunits like a cellular automaton.

They follow certain rules and you turn the crank, time goes forward, whatever happens, happens.

Now, clearly you can get complexity from this.

Clearly you can get some very interesting looking things, right?

So the game of life, all those kinds of cool things, right?

You can get complexity.

No problem.

But the idea that that model is going to be sufficient to explain and control things like morphogenesis is a hypothesis.

It’s okay to make that hypothesis, but we know it’s false.

Despite the fact that that is what we learn in basic cell biology and developmental biology classes. The first time you see something like this, inevitably, especially if you're an engineer in those classes, you raise your hand and you go, how does it know to do that? How does it know to make four fingers instead of seven?

What they tell you is: it doesn't know anything. They make sure that that's very clear. When we learn these things, they all insist:

Nothing here knows anything.

There are rules of chemistry.

They roll forward and this is what happens.

Okay, now that model is testable.

We can ask, does that model explain what happens?

Here’s where that model falls down.

If you have that model and the situation changes, either there's damage or something in the environment has happened, those kinds of open-loop models do not adjust to give you the same goal by different means.

This is William James's definition of intelligence: same goal by different means.

And in particular, working them backwards, let’s say you’re in regenerative medicine and you say, okay, but this is the situation now.

I want it to be different.

What should the rules be?

It's not reversible. The thing with those kinds of open-loop models is they're not reversible.

You don’t know what to do to make the outcome that you want.

All you know how to do is roll them forward.

Right?

Now, in biology, we see the following. If you have a developmental system and you put barriers in its way... so, I'm going to give you two pieces of evidence that suggest that there is a goal.

One piece of evidence is that if you try to block these things from the outcome that they normally have, they will do some amazing things.

Sometimes very clever things, sometimes not at all the way that they normally do it. Right? So this is William James's definition, same goal by different means: by following different trajectories, they will go around various local maxima and minima to get to where they need to go.

It is navigation of a space.

It is not blind turn-the-crank, wherever-we-end-up-is-where-we-end-up.

That is not what we see experimentally.
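
The contrast being drawn, open-loop rule-following versus navigation toward a goal state, can be caricatured in a few lines of Python. The grid, barrier, and controllers here are invented for illustration only:

```python
from collections import deque

GOAL = (5, 5)
BARRIER = {(3, y) for y in range(9)} - {(3, 8)}   # a wall with one gap

def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def open_loop(start):
    """Replays the script that worked before the barrier existed."""
    p = start
    for move in "RRRRRUUUUU":
        x, y = p
        q = (x + 1, y) if move == "R" else (x, y + 1)
        if q not in BARRIER:
            p = q                  # blocked moves just fail; no adjustment
    return p

def goal_directed(start):
    """Searches for any route to the goal: same goal by different means."""
    frontier, seen = deque([start]), {start}
    while frontier:
        p = frontier.popleft()
        if p == GOAL:
            return p
        for q in neighbors(p):
            if 0 <= q[0] <= 8 and 0 <= q[1] <= 8 and q not in BARRIER and q not in seen:
                seen.add(q)
                frontier.append(q)
    return None

print(open_loop((0, 0)))       # (2, 5): derailed by the barrier
print(goal_directed((0, 0)))   # (5, 5): reroutes through the gap
```

The open-loop controller produces the same moves no matter what; the goal-directed one reaches the same end state by a different trajectory, which is the experimental signature being described.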

And more importantly, I think, is what we've shown; this is something that I'm particularly happy with from our lab over the last 20 years. We've shown the following.

We can actually rewrite the goal states, because we found them. Through our work on bioelectric imaging and bioelectric reprogramming, we have actually shown how those goal memories are encoded. At least in some cases; we certainly haven't got them all, but we have some. If you can find where the goal state is encoded, you can read it out and reset it.

And the system will now implement a new goal based on what you just reset.

That is the ultimate evidence that your goal-directed model is working, because if there were no goal, that shouldn't be possible. Once you can find it, read it, interpret it, and rewrite it, it means, by any engineering standard, that you're dealing with a homeostatic mechanism.

How do you find where the goal is encoded?

So through lots and lots of hard work.

The barrier thing is part of that, creating barriers and observing.

The barrier thing tells you that you should be looking for a goal.

So step one, when you approach a given system, is to create barriers of different kinds until you see how persistent it is at pursuing the thing it seemed to have been pursuing originally. And then, okay, cool: this thing has agency, first of all. And then, second of all, you start to build an intuition about exactly which goal it's pursuing.

Yes.

The first couple of steps are all imagination.

You have to ask yourself, what space is this thing even working in?

And you really have to stretch your mind, because we can't imagine all the spaces that systems work in, right? So step one is: what space is it working in?

Step two, what do I think the goal is?

And let's not mistake step two: you're not done just because you've made a hypothesis. That doesn't mean you can say, well, I see it doing this, therefore that's the goal. You don't know that; you have to actually do experiments.

Now, once you’ve made those hypotheses, now you do the experiments.

You say, okay, if I want to block it from reaching its goal, how do I do that?

And this, by the way, is exactly the approach we took with the sorting algorithms and with everything else. You hypothesize the goal, you put a barrier in, and then you get to find out what level of ingenuity it has.

Maybe what you see is, well, that derailed everything.

So probably this thing isn't very smart. Or you see, oh, wow, it can go around and do these things. Or you might say, wow, it's taking a completely different approach, using its affordances in novel ways; that's a high level of intelligence. You will find out what the answer is.
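
In the spirit of that protocol, here is a toy barrier experiment on plain bubble sort, loosely inspired by (and far simpler than) the sorting-algorithm studies mentioned above. Hypothesized goal: sortedness. Barrier: elements randomly refuse to swap:

```python
import random

def sortedness(xs):
    """Fraction of adjacent pairs in order; 1.0 means fully sorted."""
    return sum(a <= b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

def damaged_pass(xs, refuse_p):
    """One bubble-sort pass in which each swap is refused with probability refuse_p."""
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1] and random.random() > refuse_p:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

random.seed(0)
xs = random.sample(range(100), 100)
passes = 0
while sortedness(xs) < 1.0:
    damaged_pass(xs, refuse_p=0.5)   # half of all attempted swaps fail
    passes += 1
print(f"goal reached anyway, in {passes} passes")
```

The hypothesized goal is still reached, only along a longer trajectory, which is exactly the kind of persistence the barrier protocol is designed to reveal.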

Another broad question. Speaking of unconventional organisms, and going to Richard Dawkins, for example, with memes: how weird can we get? Is it possible to look at ideas as organisms, then create barriers for those ideas, taking actual individual ideas and trying to empathize and visualize what kind of space they might be operating in? Can they be seen as organisms that have a mind?

Yeah.

Okay. If you want to get really weird, we can get really weird here. Think about the caterpillar-butterfly transition.

Okay.

So you've got a caterpillar, a soft-bodied kind of creature, with a particular controller that's suitable for running a soft-bodied robot. It has a brain for that task. And then it has to become this butterfly, a hard-bodied creature that flies around.

During the process of metamorphosis, its brain is basically ripped up and rebuilt from scratch.

Now, what's been found is that you can train the caterpillar, give it a new memory, meaning that if the caterpillar sees a disc of a certain color, it crawls over and gets to eat some leaves.

Turns out the butterfly retains that memory.

Now, the obvious question is how the hell do you retain memories when the medium is being

refactored like that?

Let's put that aside, because I'm going to get somewhere even weirder than that.

There’s something else that’s even more interesting than that.

It's not just that you have to retain the memory. You have to remap that memory onto a completely new context, because guess what? The butterfly doesn't move the way the caterpillar moves, and it doesn't care about leaves. It wants nectar from flowers. So if that memory is going to survive, it can't just persist. It has to be remapped into a novel context.

Now here's where things get weird. We can take a couple of different perspectives here. We can take the perspective of the caterpillar facing some sort of crazy singularity and say, my God, I'm going to cease to exist, but I'll sort of be reborn in this new higher-dimensional world where I'll fly.

Okay.

So that's one perspective. We can take the perspective of the butterfly and say, well, here I am, but I seem to be saddled with some tendencies and some memories, and I don't know where the hell they came from. I don't remember exactly how I got them, and they seem to be a core part of my psychological makeup. They come from somewhere; I don't know where.

Right.

So you can take that perspective, but there’s a third perspective that I think is really interesting

and useful.

And the third perspective is that of the memory itself.

If you take the perspective of the memory... so, what is the memory?

It is a pattern.

It is an informational pattern that was continuously reinforced within one cognitive system.

And now here I am, I’m this memory.

What do I need to do to persist into the future?

Well, now I’m facing the paradox of change.

If I try to remain the same, I'm gone. There's no way the butterfly is going to retain me in the original form that I'm in now. What I need to do is change, adapt, and morph.

Now you might say, well, that's kind of crazy. How can you take the perspective of a pattern within an excitable medium? Agents are physical things; you're talking about information, right?

So let me tell you another quick science fiction story. Imagine that some creatures come out from the center of the earth. They live down in the core, and they're incredibly dense because they live down in the core. They have gamma-ray vision and so on.

So they come out to the surface.

What do they see?

Well, all of this stuff that we're seeing here is like a thin plasma to them. They are so dense that none of this is solid to them. They don't see any of this stuff. So they're walking around; to them, the planet is sort of covered in this thin gas. And one of them is a scientist, and he's taking measurements of the gas.

And he says to the others, you know, I've been watching this gas, and there are these little whirlpools in it. And they almost look like agents. They almost look like they're doing things. They're moving around. They kind of hold themselves together for a little while, and they're trying to make stuff happen.

And the others say, well, that's crazy. Patterns in the gas can't be agents. We are agents; we're solid. This is just patterns in an excitable medium. And by the way, how long do they hold together? He says, well, about a hundred years. That's crazy. No real agent can dissipate that fast.

Okay.

We are all metabolic patterns among other things.

Right.

So you see what I'm warming up to here. One of the things that we've been trying to dissolve, and this is some work that I've done with Chris Fields and others, is this distinction between thoughts and thinkers. All agents are patterns within some excitable medium, we can talk about what that is, and they can spawn off others.

And now you can have a really interesting spectrum.

Here's the spectrum. You can have fleeting thoughts, which are like waves in the ocean. When you throw a rock in, they sort of go through the excitable medium and then they're gone. They pass through and they're gone. So those are kind of fleeting thoughts.

Then you can have patterns that have a degree of persistence.

So they might be hurricanes or solitons or persistent thoughts or earworms.

Or depressive thoughts.

Those are harder to get rid of.

They stick around for a little while. They often do a little bit of niche construction: they change the actual brain to make it easier to have more of those thoughts. That's a real thing. And so they stay around longer.

Now, what's further along than that? Well, personality fragments, as in dissociative identity disorder. They're more stable, and they're not just on autopilot. They have goals and they can do things. And then past that is a full-blown human personality.

And who the hell knows what’s past that?

Maybe some sort of transhuman, you know, transpersonal, like, I don’t know.

Right.

But with this idea, again, I'm back to this notion of a spectrum. There is not a sharp distinction between "we are real agents" and then "we merely have these thoughts."

Yeah.

Patterns can be agents too, but again, you don’t know until you do the experiment.

So if you want to know whether a soliton or a hurricane or a thought within a cognitive

system is its own agent, do the experiment, see what it can do.

Can it learn from experience? Does it have memories? Does it have goal states? What can it do?

Right.

Does it have language?

So, coming back then to your original question: yeah, we can definitely apply this methodology to ideas and concepts and social whatevers, but you've got to do the experiment.

That's such a challenging thought experiment, thinking about the memories going from the caterpillar to the butterfly as an organism. I think at the very basic level, intuitively, we think of organisms as hardware, and of software as not possibly being able to be an organism. But what you're saying is that it's all just patterns in an excitable medium, and it doesn't really matter what the pattern is or what the excitable medium is; we need to do the testing: how persistent is it, how goal-oriented is it? And there are certain kinds of tests to do that. You can apply that to memories. You can apply that to ideas. You can apply that to anything, really. You could probably think about consciousness that way. There's really no boundary to what you can imagine. Really wild things could be minds.

Yeah.

Stay tuned.

I mean, this is exactly what we’re doing.

We're getting progressively more and more unconventional. This whole distinction between software and hardware, I think, is a super important concept to think about, and yet the way we've mapped it onto the world, I would like to blow that up in the following way. And again, I'll tell you what the practical consequences are, because these are not just fun stories that we tell each other. These have really important research implications.

Think about a Turing machine.

So one thing you can say is the machine’s the agent, it has passive data and it operates

on the data and that’s it.

The story of agency is the story of whatever that machine can and can’t do.

The data is passive and it moves it around.

You can tell the opposite story.

You can say, look, the patterns in the data are the agent. The machine is a stigmergic scratch pad in the world of the data doing what data does. The machine is just the consequence, the scratch pad on which the data works itself out.

And both of those stories make sense, depending on what you’re trying to do.

Here's the biomedical side of things: our program in bioelectrics and aging.

One model you could have is: the physical organism is the agent, and the cellular collective has pattern memories, specifically what I was saying before, goals, anatomical goals. If you want to persist for a hundred-plus years, your cells had better remember what your correct shape is and where the new cells go, right? So there are these pattern memories that exist during embryogenesis, during regeneration, during resistance to aging. We can see them, we can visualize them.

One thing you can imagine is: fine, the physical body, the cells, are the agent, and the electrical pattern memories are just data. And what might happen during aging is that the data might get degraded. They might get fuzzy. And so what we need to do is reinforce the data, reinforce the pattern memories. That's one specific research program.

And we're doing that, but that's not the only research program, because the other thing you might imagine is: what if the patterns are the agent, in exactly the same sense that, in our brains, we think it's the patterns of electrophysiological computation and whatever else that are the agent, right? What they're doing in the brain are the side effects of the patterns working themselves out, and those side effects might be to fire off some muscles and some glands and some other things. From that perspective, maybe what's actually happening is that the agent is finding it harder and harder to be embodied

in the physical world.

Why?

Because the cells might get less responsive. In other words, the cells are sluggish. The patterns are fine; they're just having a harder time making the cells do what they need to do. And then maybe what you need to do is not reinforce the memories.

Maybe what you need to do is make the cells more responsive to them.

And that is a different research agenda.

Which we are also doing. We actually have evidence for that as well now, and we published it recently.

And so my point here is: when we tell these crazy sci-fi stories, the only worth to them, and the only reason I'm talking about them now, a year ago I wasn't talking about this stuff, is that these are now actionable in terms of specific experimental research agendas that are heading, I hope, to the clinic in some of these biomedical approaches.

And now we can go beyond this and say: okay, up until now, what have we considered disease states to be? Well, we know there's organic disease. Something is physically broken. We can see the tissue breaking down; there's damage in the joint, or in what the liver is doing, whatever. We can see these things. But what about disease states that are not physical states? Are there physiological states or informational states or cognitive problems? In other words, in all of these other spaces, you can start to ask: what's a barrier in gene expression space?

What's a local minimum that traps you in physiological state space? And what is a stress pattern that keeps itself together, moves around the body, causes damage, tries to keep itself going, right? What level of agency does it have? This suggests an entirely different set of approaches to biomedicine.

And anybody who's, let's say, in the alternative medicine community is probably yelling at the screen and saying, we've been saying this for hundreds of years. And I'm well aware: the ideas are not new. What's new is being able to take this and make it actionable, and say, yeah, but we can image this now. I can now actually see the bioelectric patterns and why they go here and not there. And we have the tools that now, hopefully, will get us to therapeutics.

So this is very actionable stuff, and it all leans on not assuming we know minds

when we see them because we don’t, and we have to do experiments.

To return to the software-hardware distinction, you're saying that we can see the software as the organism and the hardware as just the scratch pad, or we can see the hardware as the organism and the software as the thing that the hardware generates. And in so doing, we can decrease the amount of importance we assign to something like the human brain, where it could be the activations, the electrical signals, that are the organism, and the brain is the scratch pad.

And by saying scratch pad, I don’t mean it’s not important.

When we get to talking about the platonic space, we'll have to talk about how important the interface actually is. The scratch pad isn't unimportant.

The scratch pad is critical.

My only point is that when we have these formalisms of software, of hardware, of other things, the way we map those formalisms onto the world is not obvious. It's not given to us. We get used to certain things, right? But who's the hardware, who's the software, who's the agent, and who's the excitable medium is to be determined.

So this is a good place to talk about the increasingly radical, weird ideas that you've been writing about. You've mentioned it a few times: the platonic space. There's this ingressing-minds paper where you describe the platonic space. And you mentioned there's an asynchronous conference happening, which is a fascinating concept: people are just contributing asynchronously.

So what happened was: I had given a couple of talks on this crazy notion, which I'll describe momentarily. I then found a couple of papers in the machine learning community on the platonic representation hypothesis. And I said, that's pretty cool. These guys are climbing up to the same point: where I'm getting at it from biology and philosophy and whatever, they're getting there from computer science and machine learning.

I figured we'd take a couple of hours: I'll give a talk, they'll give a talk, we'll talk about it.

I thought there were going to be three talks at this thing.

Once I started reaching out to people for this, everybody sort of said, you know, I know somebody who's really into this stuff, but they never talk about it because there's no audience for it. So I reached out to them, and then they said, yeah, I know this mathematician, or I know this economist, whatever, who has these ideas, and there's no way we can ever talk about them.

So I got this whole list, and it became completely obvious that we can't do this in a normal format; we're now booked up through December. So every week in our center, somebody gives a talk, we kind of discuss it, and it all goes up on this thing.

I’ll give you a link to it.

And then there's a huge running discussion after that. And in the end, we're all going to get together for an actual real-time discussion session and talk about it. But there are going to be probably 15 or so talks about this, from all kinds of disciplines.

It's blown up in a way that made me realize how much of an undercurrent of these ideas already existed, ready to go. Like, now is the time. I've been thinking about these things for, I don't know, 30-plus years.

I never talked about them before because they weren’t actionable before.

There wasn’t a way to actually make empirical progress with this.

Now, you know, this is something that Pythagoras and Plato and probably many people before

them talked about, but now we're at the point where we can actually do experiments, and they're making a difference in our research program.

You can just look at the platonic space conference. There's a bunch of different fascinating talks: yours first, on patterns of form and behavior beyond emergence; then radical platonism and radical empiricism from Joe Dietz; then patterns and explanatory gaps in psychotherapy; Does God Play Dice from Alexi Tolchinsky; and so on.

So let’s talk about it.

What is it?

And it's fascinating that the origins of some of these ideas are connected to ML people thinking about representation spaces.

Yeah.

The first thing I want to say is that while I’m currently calling it the platonic space,

I am in no way trying to stick close to the things that Plato actually thought about.

In fact, to whatever extent we even know what that is, I think I depart from it in some ways.

And I’m going to have to change the name at some point.

The reason I'm using the name now is that I wanted to be clear about a particular connection to mathematics: a lot of mathematicians would call themselves Platonists, because what they think they're doing is discovering, not inventing as a human construction, but discovering, a structured, ordered space of truths.

Let's put it this way. In biology, as in physics, something very curious happens if you keep asking why: something interesting goes on. I'll give you two examples.

First of all, imagine cicadas. The cicadas come out on 13-year and 17-year cycles.

Okay.

And so if you're a biologist and you ask, why is that? then you get this explanation: well, it's because they're trying to be off-cycle from their predators. Because if it were 12 years, then every two-year, three-year, four-year, and six-year predator would eat you when you come out.

Right.

And you say, okay, cool. That makes sense.

What’s special about 13 and 17?

Oh, they’re prime.

Uh-huh.

And why are they prime?

Well, now you’re in the math department.

You’re no longer in the biology department.

You’re no longer in the physics department.

You're now in the math department, trying to understand why the distribution of primes is what it is.
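
The cicada argument is easy to check numerically; a back-of-the-envelope sketch (the range of predator cycle lengths is an arbitrary choice):

```python
from math import lcm

# A prey cycle of n years meets a predator cycle of p years every lcm(n, p) years.
def meetings_per_millennium(n, predator_cycles=range(2, 13)):
    return sum(1000 // lcm(n, p) for p in predator_cycles)

for n in (12, 13, 14, 15, 16, 17):
    print(n, meetings_per_millennium(n))
# 12 collides with its predators far more often than the primes 13 and 17 do.
```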

Another example, and I'm not a physicist, but what I see is: every time you talk to a physicist and you say, hey, why do the leptons do this or that, or why are the fermions doing whatever, eventually the answer is, oh, because there's this mathematical SU(8) group or whatever the heck it is, and it has certain symmetries and certain structures.

Yeah.

Great.

Once again, you’re in the math department.

So something interesting happens: there are facts that you come across.

Many of them are very surprising.

You don’t get to design them.

You get more out than you put in, in a certain way, because you make very minimal assumptions

and then certain facts are thrust upon you.

For example, the value of Feigenbaum's constant, or the value of e, the base of the natural logarithm; these things you sort of discover, right?
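
That thrust-upon-you quality is easy to feel in one line of code: make the minimal assumption of compounding ever more finely, and e appears whether you wanted it or not:

```python
# e emerges from the bare definition lim (1 + 1/n)^n; nobody chose 2.71828...
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, (1 + 1 / n) ** n)
```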

And the salient fact is this.

If those facts were different, then biology and physics would be different, right?

So they matter. They impact the physical world instructively, functionally.

If the distribution of primes was something else, well, then the cicadas would have been

coming out at different times, but the reverse isn’t true.

What I mean is: there is nothing you can do in the physical world, as far as I know, to change e or to change Feigenbaum's constant. You could have swapped out all the physical constants at the Big Bang, you could change all those different things, and you are not going to change these.

So this, I think, Plato and Pythagoras understood very clearly: that there is a set

of truths which impact the physical world, but they themselves are not defined by and determined

by what happens in the physical world.

You can’t change them by things you do in the physical world, right?

And so I’ll make a couple of claims about that.

One claim is: I think we call "physics" those things that are constrained by those patterns. When you say, hey, why is this the way it is? Ah, it's because of how symmetries or topology or whatever work. Biology is the set of things that are enabled by those. They're free lunches. Biology exploits these kinds of truths, and really, that enables biology and evolution to do amazing things without having to pay for them. I think there are a lot of free lunches going on here.

And so I show you a xenobot or an anthrobot, and I say, hey, look, here are some amazing things they're doing that their tissue has never done before in its history. You say, first of all, where did that come from? And when did we pay the computational cost for it? Because we know when we paid the computational cost to design a frog or a human: it was during the eons that the genome was bashing against the environment, getting selected, right? That's when you pay the computational cost. But there have never been any anthrobots. There have never been any xenobots. When did we pay the computational cost for designing kinematic self-replication and all these things that they're able to do?

So there are two things people say. One is, well, you sort of got it at the same time that they were being selected to be good humans and good frogs. Now, the problem with that is it kind of undermines the point of evolution. The point of evolutionary theory was to have a very tight specificity between how you are now and the history of selection that got you here, right? The history of environments that got you to this point. If you say, yeah, okay, this is what your environmental history was, and, by the way, you also got something completely different, these other skills nobody knew about, that's really strange, right?

And so then what people say is, well, it's emergent. And you ask, what's that? What does that mean, besides the fact that you got surprised? Emergence often just means: I didn't see it coming. Something happened, and I didn't know it was going to happen. So what does it mean that it's emergent?

And people say, well, there are many emergent things like this.

For example, the fact that gene regulatory networks can do associative learning.

Like that’s amazing.

And you don’t need evolution for that.

Even random gene regulatory networks can do associative learning. I ask, why does that happen? And they say, well, it's just a fact that holds in the world. Just a fact that holds.

So now you have an option, and you can go one of two ways.

You can either say, okay, look, I like my sparse ontology.

I don’t want to think about weird platonic spaces.

I’m a physicalist.

I want the physical world, nothing more.

So what we're going to do is, when we come across these crazy things that are very specific, like anthrobots having four specific behaviors that they switch around, why four? Why not 12? Why not one?

When we come across these things, just like when we come across the value of e or Feigenbaum's number or whatever, what we're going to do is write them down in our big book of emergence. And that's it. We're just going to have to live with it.

This is what happens: there are some cool surprises, and when we come across them, we're going to write them down. Great. It's a random grab bag of stuff. That's one way to go.

The upside is you get to be a physicalist, and you get to keep your sparse ontology. The downside is, I find it incredibly pessimistic and mysterious, because you're then basically just willing to make a catalog of these amazing patterns. Why not instead, and this is why I started with the platonic terminology, do what the mathematicians already do?

A huge number of them say: we are going to make the same optimistic assumption that science makes, that there's an underlying structure to that latent space. It's not a random grab bag of stuff. There's a structure to the space where these patterns come from.

And by studying them systematically, we can get from one to another.

We can map out the space.

We can find out the relationships between them.

We can get an idea of what’s in that space.

And we’re not going to assume that it’s just random.

We’re going to assume there’s some kind of structure to it.

And you'll see all kinds of people, well-known mathematicians who talk about this stuff, Penrose and lots of others, who will say that, yeah, there is this other space, and it has structure, it has components to it, and we can traverse that space in various ways. And then there's the physical space.

So I find that much more appealing, because it suggests a research program, which we are now undertaking in our lab.

The research program is: everything that we make, cells, embryos, robots, biobots, language models, simple machines, all of it, they are interfaces. All physical things are interfaces to these patterns. You build an interface, and some of those patterns are going to come through that interface, depending on what you build.

Some patterns rather than others are going to come through. The research program is mapping out the relationship between the physical pointers that we make and the patterns that come through them, right?

Understanding what is the structure of that space?

What exists in that space?

And what do I need to make physically to make certain patterns come through?

Now, when I say patterns, we have to ask: what kinds of things live in that space? Well, the mathematicians will tell you what we already know. We have a whole list of objects, the amplituhedrons and all this crazy stuff that lives in that space.

Yeah, I think that’s one layer of stuff that lives in that space.

But I think those patterns are the lower agency kinds of things that are basically studied

by mathematicians.

What also lives in that space are much more active, more complex, higher agency patterns

that we recognize as kinds of minds.

A behavioral scientist would look at that pattern and say, well, I know what that is: that's the competency for delayed gratification, or problem-solving of certain kinds, or whatever.

And so what I end up with right now is a model in which that latent space contains things that come through physical objects. Simple patterns, right? Facts about triangles, and Fibonacci patterns, and fractals, and things like that.

But also, if you make more complex interfaces, such as biologicals, and importantly, not just biologicals in general, but let's say cells and embryos and tissues, what you will then pull down are much more complex patterns, the ones where we say, ah, that's a mind, that's a human mind, or that's a snake mind or whatever.

So I think the mind-brain relationship is exactly the kind of thing the math-physics relationship is: in some very interesting way, there are truths of mathematics that become embodied, and they kind of haunt physical objects, right? In a very specific, functional way. And in the exact same way, there are other patterns, much more complex, higher-agency patterns, that basically inform living things, the ones that we see as obvious embodied minds.

Okay.

Given how weird and complicated the thing we're describing is, and we'll talk about it more, you've got to ELI5 the basics for a person who has never seen this. So again, you mentioned things like pointers, the physical objects themselves, or the brain, being pointers to that platonic space.

What is in that platonic space?

What is the platonic space?

What is the embodiment?

What is the pointer?

Yeah.

Okay. Let's try it this way.

There are certain facts of mathematics. Take the distribution of prime numbers, right? If you map them out, they make these nice spirals.

And there's an image that I often show, which is a very particular kind of fractal: the Halley map. It's pretty awesome; it actually looks very organic, very biological. So if you look at that image, which has a very specific, complex structure, it's the map of a very compact mathematical object.

That formula is like, you know, Z cubed plus seven.

It’s something like that.

That’s it.

So now you look at that structure and you say: where does that actually come from?

It's definitely not packed into the Z cubed plus seven; there are not enough bits in that to give you all of that.

There’s no fact of physics that determines this.

There’s no evolutionary history.

It’s not like we selected this based on some, you know, from, from a larger set over time.

Where does this come from?
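The scale of what comes out of such a tiny formula is easy to check for yourself. Below is a minimal sketch under assumptions of mine: Levin only says the formula is "something like z cubed plus seven," so I take f(z) = z^3 + 7 and iterate Halley's root-finding method over a grid of complex starting points; his exact map and rendering may differ:

```python
import numpy as np

def halley_fractal(n=400, extent=2.5, iters=30):
    # Grid of complex starting points.
    y, x = np.mgrid[-extent:extent:n * 1j, -extent:extent:n * 1j]
    z = x + 1j * y
    unconverged = np.zeros(z.shape)
    for _ in range(iters):
        f, fp, fpp = z**3 + 7, 3 * z**2, 6 * z
        z = z - (2 * f * fp) / (2 * fp**2 - f * fpp)   # Halley's method step
        unconverged += np.abs(z**3 + 7) > 1e-6          # pixels still searching
    return unconverged  # intricate basin structure from a one-line formula

img = halley_fractal()
print(img.shape, float(img.min()), float(img.max()))
# Render `img` with any image library to see the organic-looking basins.
```

The point of the exercise is the information asymmetry: the formula fits in a dozen characters, while the image it induces has structure at every scale.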

Or think about the way that biology exploits these things.

Imagine, imagine a world in which the highest fitness belonged to a certain kind of triangle,

right?

So evolution cranks a bunch of generations and it gets the first angle, right?

And cranks a bunch more generations, gets a second angle, right?

Now there's something amazing that happens: it doesn't need to look for the third angle, because you already know.

If you know two, you get this magical free gift from geometry that says: I already know what the third one should be. You don't have to go look for it.
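The free gift in question is just the Euclidean angle-sum theorem; once two angles are fixed, the third comes at no search cost:

$$\alpha + \beta + \gamma = 180^\circ \quad\Longrightarrow\quad \gamma = 180^\circ - \alpha - \beta$$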

Or, as evolution: if you invent a voltage-gated ion channel, which is basically a transistor, and you can make a logic gate, then all the truth tables, and the fact that NAND is special (it is universal on its own), and all these other things, you don't have to evolve those things.

You get those for free.

You inherit those.
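For instance, here is a minimal sketch of that inherited free lunch: once you have a single NAND gate, the rest of Boolean logic follows with no further invention (purely illustrative):

```python
# NAND as a universal gate: everything below is derived, not separately built.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

# Exhaustively check the derived truth tables.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NOT, AND, OR all recovered from NAND alone")
```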

Where do all those things live?

These mathematical truths that you come across that you don’t have any choice about.

Once you've committed to certain axioms, there's a whole bunch of other stuff that is now just what it is.

And so what I'm saying, and this is what Pythagoras was saying, I think, is that there is a whole space of these kinds of truths.

Now, he was focused on mathematical ones, but he was embodying them in music and in geometry and things like that.

There is this space of patterns, and they make a difference in the physical world: to machines, to sound, to things like that.

What I'm doing is extending that.

And so far we’ve only been looking at the low agency inhabitants of that world.

There are other patterns that we would recognize as kinds of minds, and you don't see them in this space until there's an interface, until there's a way for them to come through into the physical world.

It's the same way that you have to make a triangular object before you can actually see what you're going to get out of the rules of geometry, or you have to actually run the computation on the fractal before you actually see that pattern.

If you want to see some of those minds, you have to build an interface, right?

At least if you're going to interact with them in the physical world, the way we normally do science. As Darwin said, mathematicians seem to have an extra sense, a different sense than the rest of us.

And that's right: mathematicians can perhaps interact with these patterns directly in that space.

But for the rest of us, we have to make interfaces.

And when we make interfaces, which might be cells or robots or, you know, embryos or whatever,

what we are pulling down are minds that are fundamentally not produced by

physics.

So I don't believe, and I don't know if we're going to get into the whole consciousness thing, but I don't believe that we create consciousness. Whether we make babies or whether we make robots, nobody's creating consciousness.

What you create is an interface, a physical interface through which specific patterns, which

we call kinds of minds are going to ingress, right?

And consciousness is what it looks like from that direction, looking out into the world. It's what we call the view from the perspective of the platonic patterns.

Just to clarify, what you’re saying is a pretty radical idea here.

So if, uh, there’s a mapping from mathematics to physics, okay, that’s understandable, intuitive,

as you’ve described.

But what you’re suggesting is there’s a mapping from some kind of abstract mind object to an

embodied brain that we think of as a mind.

As fellow humans, what is that, exactly? Because you said interface, and you've also said pointer. So the brain, and I think you said somewhere, is a thin interface.

A thin client.

Yeah.

The brain is a thin client.

Yeah.

Thin client.

Okay.

So you’re, a brain is a thin client to this other world.

Yeah.

Can you just lay out very clearly how radical the idea is?

Sure.

Because you're kind of dancing around it. I think you could also point to Donald Hoffman, who speaks of an interface to a world, so that we only interact with the quote-unquote real world through an interface. What is the connection here?

Yeah.

Okay, a couple of things.

First of all, when you said it makes sense for physics, I want to show that it's not as simple as it sounds, because what it means is that even in Newton's world, a boring, classical universe long before quantum anything, physicalism was already dead.

I mean, think about what that means. This is nuts, because Pythagoras and Plato already knew perfectly well that even in a totally classical, deterministic world, you already have the ingression of information that determines what happens, and what's possible and what's not possible in that world, from a space that is itself not physical.

In other words, take something like e, the base of the natural logarithm, right? Nothing in Newton's world sets the value of e; there is nothing you could do to set the value of e in that world.

And yet the fact that it was that, and not something else, governed all sorts of properties of things that happen.
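To make that concrete: the value of e is pinned down by mathematics alone, with no physical knob anywhere that could make it come out differently:

$$e \;=\; \sum_{n=0}^{\infty} \frac{1}{n!} \;=\; \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n} \;=\; 2.71828\ldots$$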

That classical world was already haunted by patterns from outside that world.

This should strike you as wild. This is not saying that everything was cool, that physicalism was great up until maybe we got quantum interfaces or consciousness or whatever, but originally it was fine. No, this is saying that that worldview was already impossible, really, from a very long time ago: we already knew that there are non-physical properties that matter in the physical world.

This is a chicken or the egg question.

You’re saying Newton’s laws are creating the physical world?

That is a very deep follow-on question that we'll come back to in a minute.

All I was saying about Newton is that you don't need quantum anything. You don't need to think about consciousness.

Long before you get to any of that, as Pythagoras I think knew, we already have the idea that this physical world is being strongly impacted by truths that do not live in the physical world.

Which truths are we referring to?

Are we talking about Newton’s law, like mathematical equations?

Mathematical facts. So, for example, the actual value of e, or...

Oh, like very primitive mathematical facts.

Yeah.

Yeah.

I mean, if you ask Don Hoffman, there's this amplituhedron thing: a set of mathematical objects that determines all the scattering amplitudes of the particles and whatever.

They don't have to be simple. The old ones were simple; now they're crazy. I can't visualize this amplituhedron thing; maybe they can.

But all of these are mathematical structures that explain and determine facts about the physical world.

Right.

If you ask physicists, hey, why are there this many of this type of particle? Because this mathematical object has these symmetries. That's why.

So Newton is discovering these things. He's not inventing them.

This is very controversial, right?

And there are, of course, physicists and mathematicians who disagree with what I'm saying, for sure.

But what I'm leaning on is simply this: I don't know of anything you can do in the physical world to change them. Say you're around at the Big Bang and you get to set all the constants, set up physics however you want. Can you change e? Can you change Feigenbaum's constant? I don't think you can.

Is that an obvious statement?

I don’t even know what it means to change the parameters at the start of the big bang.

So physicists do this. They'll say: okay, if we made the ratio between the gravitational and the electromagnetic force different, would we have matter? How many dimensions would we have? Would there be inflation? Would there be this or that? Right, you can imagine playing with it.

There are however many unitless constants of physics; these are like knobs on the universe that could in theory be different, and then you'd have different physical properties.

You’re saying that’s not going to change the axiomatic systems that mathematics has.

What I’m not saying is that every alien everywhere is going to have the exact same math that we

have.

That’s not what I’m claiming.

Although maybe, but that’s not what I’m claiming.

What I’m saying is you get more out than you put in.

Maybe some alien somewhere made a different choice of how they're going to do their math. But once you've made your choice, you get saddled with a whole bunch of new truths that you discover and can't do anything about.

They are given to you from somewhere and you can say they’re random or you can say, no,

there’s a space of these facts that they’re pulled from.

There’s a latent space of options that they come from.

So when your e is exactly 2.718 and so on, there is nothing you can do in physics to change it.

And you’re saying that space is immutable.

It’s…

I’m not saying it’s immutable.

So I think Plato may or may not have thought that these forms are eternal and unchanging.

That’s one place we differ.

I actually think that space has some action to it, maybe even some computation to it.

But we’re just pointers.

Okay, that’s…

Well, so let’s…

Okay, so I’ll circle back around to that whole thing.

So the only thing I was trying to do is blow up the idea that we’re cool with how it works

in physics, no problem there.

Like, I don’t…

Like, I think that’s a much bigger deal than people normally think it is.

I think already there, you have this weird haunting of the physical world by patterns

that are not coming from the physical world.

The reason I emphasize this is because of what happens when I amplify this into biology: I don't think a brand-new thing suddenly jumps in.

I think what we call biology is systems that exploit the hell out of this. Physics is constrained by it, but we call biology those things that make use of these kinds of patterns and run with them.

And so, again, I just think it’s a scaling.

I don’t think it’s a brand new thing that happens.

I think it’s a scaling, right?

So, what I’m saying is, we already know from physics that there are non-physical patterns,

and these are generally patterns of form, which is why I call them low agency, because

they’re like fractals that stand still, and they’re like prime number distributions.

Although there's a mathematician speaking at our symposium who tells me that I'm actually too chauvinistic even there: even those things have more oomph than I gave them credit for, which I love.

So, what I’m saying is, those kind of static patterns are things that we typically see in

physics, but they’re not the full extent of what lives in that space.

That space is also home to some patterns that are very high agency.

And if we give them a body, if we build a body that they can inhabit, then we get to see different behavioral competencies, such that the behavioral scientists say: oh, I know what that looks like, that's this kind of mind or that kind of mind.

In a certain sense, I mean, yes, what I’m saying is extremely radical, but it is a very old

idea.

It's the old idea of a dualistic worldview, right, where the mind was not in the physical body and in some way interacted with the physical brain.

So, I just want to be clear, I’m not claiming that this is fundamentally a new idea.

This has been around for forever.

However, it’s mostly been discredited and it’s a very unpopular view nowadays.

There are very few people in the, for example, cognitive science community or anywhere else

in science that like this kind of view.

Primarily because of the interaction problem, which Descartes was already getting crap for when he first trotted this out.

The idea was: okay, if you have this non-physical mind, and then you have this brain that presumably obeys conservation of mass-energy and things like that, how are they supposed to interact?

There are many other problems there.

So, what I’m trying to point out is that, first of all, physics already had this problem.

You didn’t have to wait until you had biology and cognitive science to ask about it.

And the way I think we need to think about this, coming back to my point, is that the mind-brain relationship is basically of the same kind as the math-physics relationship.

The same way that non-physical mathematical facts haunt physical objects is basically how I think different kinds of patterns, which we call kinds of minds, are manifesting through interfaces like brains.

How do we prove or disprove the existence of that world?

Because it’s a pretty radical one.

Yeah.

Because this physical world, we can poke it. It's there.

It feels like all the incredible things, consciousness and cognition and all the goal-oriented behavior and agency, all seem to come from this 3D entity.

Yeah.

I mean…

And so, like, we can test it, we can poke it, we can hit it with a stick.

Yeah, sort of.

Makes noises.

Sort of.

I mean, so Descartes got some stuff wrong, I think.

But one thing that he did get right is the fact that you actually don't know what you can poke and what you can't poke.

The only thing you actually know are the contents of your mind, and everything else might be, and in fact, as we know from Anil Seth and Don Hoffman and various other people, definitely is, a construct.

You might be on drugs, and you might wake up tomorrow and say: my God, I had the craziest dream of being Lex Fridman, amazing.

It’s a nightmare.

Yeah.

Who knows?

It’s a ride.

Right?

But, you see, you know, it’s not clear at all that the physical poking is your primary

reality.

That’s not clear to me at all.

I don't know that that's an obvious thing that a lot of people can show is true. To take a step toward Descartes, "I think, therefore I am," that's the only thing you know for sure, and everything else could be an illusion or a dream: that's already a leap.

I think, from a basic caveman-science perspective, the repeatable experiment is where most of our intelligence about the world comes from: the reality is exactly as it is.

To take a step towards the Donald Hoffman worldview takes a lot of guts and imagination and a stripping away of the ego, and all these kinds of processes.

I think you can get there more easily by synthetic bioengineering in the following sense.

Do you feel a lack of x-ray perception?

Do you feel blind in the x-ray spectrum or in the ultraviolet?

I mean, you don’t.

You have absolutely no clue that stuff is there.

And all of your reality as you see it is shaped by your evolutionary history.

It’s shaped by the cognitive structure that you have, right?

There are tons of stuff going on around us right now of which we are completely oblivious.

There’s equally all kinds of other stuff which we construct.

And this is just modern cognitive science that says that a lot of what we think is going on

is a total fabrication constructed by us.

So I don't think this is such a leap. I mean, Descartes got there from a philosophical direction; that's not the leap I'm asking us to make.

I'm saying that it depends on your embodiment, on your interface. And this is increasingly going to be relevant as we make, first, augmented humans that have sensory substitution.

You’re going to be walking around.

Your friend’s going to be like, oh, man, I have this primary perception of the solar weather

and the stock market because I got those implants.

And what do you see?

Well, I see the traffic of the internet through the trans-Pacific channel.

We’re all going to be living in somewhat different worlds.

That’s the first thing.

The second thing is we’re going to become better attuned to other beings, whether they

be cells, tissues, you know, what’s it like to be a cell living in a 20,000 dimensional

transcriptional space, okay?

To novel beings that have never been here before that have all kinds of crazy spaces that they

live in.

And that might be AIs, it might be cyborgs, it might be hybrids, it might be all sorts

of things.

So this idea that we have a consensus reality here that’s independent of some very specifically

chosen aspects of our brain and our interaction, we’re going to have to give that up no matter

what to relate to these other beings.

I think there's a tension here. This idea that you're talking about, I think you've termed it cognitive prosthetics, is about different ways of perceiving and interacting with the world.

But I guess the question is, is our human experience, the direct human experience, is that just a

slice of the real world or is it a pointer to a different world?

That’s what I’m trying to figure out.

Because the claim you're making is a really fascinating, compelling one, and a pretty strong one: there's another world that our brain is an interface to, which means you could theoretically map that world systematically.

Yeah, which is exactly what we’re trying to do.

Right, right.

But it’s not clear that that world exists.

Yeah, yeah.

Okay.

I mean, so that’s the beautiful part about this.

And this is why I'm talking about this now, where up until about a year ago I never was: because I think this is now actionable.

Because I think this is now actionable.

So there’s this diagram that’s called a map of mathematics.

And they basically try to show how all the different pieces of math link together.

And there’s a bunch of different versions of it.

So there’s two features to this.

One is that, what is it a map of?

Well, it’s a map of various truths.

It’s a map of facts that are thrust on you.

You don’t have a choice.

Once you've picked some axioms, here are some surprising facts that are just going to be given to you.

But the other key thing about this is that it has a metric.

It’s not just a random heap of facts.

They are all connected to each other in a particular way.

They literally make a space.

And so when I say it’s a space of patterns, what I mean is it is not just a random bag of

patterns such that when you have one pattern, you are no closer to finding any other pattern.

I’m saying that there is some kind of a metric to it so that when you find one, others are

closer to it and then you can get there.

So that’s the claim.

And obviously not everybody buys this, and so on. This is one idea.

Now, how do we know that this exists? Well, I'll say a couple of things.

If that didn't exist, what is that a map of? If you don't want to call it a space, that's okay, but you can't get away from the fact that, as a matter of research, there are patterns that relate to each other in a particular way.

The final step of calling it a space is minimal; the bigger issue is: what the hell is it a map of, then, if it's not a space?

So that's the first thing, and that's how it plays out, I think, in math and physics.

Now, in biology, here's how we're going to know if this makes any sense. What we are doing now is trying to map out that space.

Look: we know that the frog genome maps to one thing, and that's a frog. It turns out that with the exact same genome, if you take the slightest step, if you just take some cells out of that environment, they can also make xenobots, with very specific, different transcriptomes, very specific behaviors, very specific shapes.

It's not just "oh, well, they do whatever": they have very specific behaviors, just like the frog had very specific properties. We can start to map out what all those are, right? Basically, try to draw the latent space from which those things are pulled.

And one of two things is going to happen in the future; come back in 20 years and we'll see how this worked out.

One thing that could happen is that we're going to see: oh yeah, just like the map of mathematics, we made a map of the space, and we now know that if I want a system that acts like this and this, here's the kind of body I need to make for it, because those are the patterns that exist.

The Anthrobots have four different behaviors, not seven and not one. And so that's what I can pull from; these are the options I have.

Is it possible that there are varying degrees of grandeur to the space you're thinking about mapping? Meaning: just like with the space of mathematics, might it strictly be the space of biology, versus a space of minds, which feels like it could encompass a lot more than just biology?

Yeah.

Except that I don't see how it would be separate, because I'm not just talking about an anatomical shape and a transcriptional profile; I'm also talking about behavioral competencies.

So when we make something and we find out that, okay, it does habituation and sensitization, it does not do Pavlovian conditioning, it does do delayed gratification, and it doesn't have language.

That is a very specific cognitive profile.

That’s a region of that space.

And there’s another region that looks different because I don’t make a sharp distinction between

biology and cognition.

If you want to explain behaviors, they are drawn from some distribution as well.

So I think in 20 years, or however long it's going to take, one of two things will happen. Either we, and other people who are working on this, are going to actually produce a map of that space and say: here's why you've gotten systems that work like this and like this and like this, but you've never seen any that work like that. Or we're going to find out that I'm wrong, and that basically it's not worth calling it a space, because it is so random and so jumbled up that we've been able to make zero progress in linking the embodiments that we make to the patterns that come through.

Yeah.

Just to be clear, from your blog post on this, and from the paper, we're talking about a space that includes a lot of stuff.

Yeah.

Yeah.

Includes human.

What is it?

Meditating Steve.

Hello.

My name is Steve AI system.

So the space includes computational systems, objects, biological systems, concepts; it includes everything.

It includes specific patterns that we have given names to.

Right.

Some of those patterns we've named mathematical objects. Some of those patterns we've named anatomical outcomes. Some of those patterns we've named psychological types.

So every entry in an encyclopedia, old-school Britannica, is a pointer into this space.

There is a set of things that I feel very strongly about because the research is telling us that’s what’s going on.

And then there’s a bunch of other stuff that I see as hypotheses for next steps that guide experiment.

So, what I'm about to tell you: these are things I don't actually know. These are just guesses, and you need to make some guesses to make progress.

I don't think, or rather I don't know, whether there are going to be specific platonic patterns for "this is the Titanic" and "this is the sister ship of the Titanic" and "this is some other kind of boat." That is not what I'm saying.

What I'm saying is that, in some way that we absolutely need to work out, when we make minimal interfaces, we get more than we put in.

We get behaviors, we get shapes, we get mathematical truths, and we get all kinds of patterns that we did not have to create.

We didn’t micromanage them.

We didn’t know they were coming.

We didn’t have to put any effort into making them.

They come from some distribution that seems to exist that we don’t have to create.

And exactly whether that space is sparse or dense, I don’t know.

So, for example, if there is some kind of platonic form for the movie The Godfather, whether it's surrounded by a bunch of crappy versions and then crappier versions still, I have no idea. I don't know if the space is sparse or not. I don't know if it's finite or infinite.

These are all things I don’t know.

What I do know is that it seems like physics, and for sure biology and cognition, are the beneficiaries of ingressions that are free lunches in some sense. We did not make them.

Calling them emergent does nothing for a research program.

Okay.

That just means you got surprised.

I think it's much better if you make the optimistic assumption that they come from a structured space that we have a prayer in hell of actually exploring. And if, in some decades, I'm wrong, then: you know what, we tried, it looks like it really is random, too bad. Fine.

Can we one day prove the existence of this world? And is there a difference between it being a really effective model for connecting things and explaining things, versus an actual place, where the information about these distributions that we're sampling actually exists, that we can hit with a stick?

Yeah, you can try to make that distinction, but I think modern cognitive neuroscience will tell you that whatever you think this is, at most it is a very effective model for predicting the future experiences you're going to have.

So all of this that we think of as physical reality is just a nice, convenient model. That's not me; that's predictive processing and active inference, that's modern neuroscience telling you this. It isn't anything I'm particularly coming up with.

All I'm saying is that the distinction you're trying to make, which is an old-school realist kind of view, is it metaphorical or is it real? All we have in science are metaphors, I think.

And the only question is how good are your metaphors?

And I think as agents act, living in a world, all we have are models of what we are and what the outside world is.

That’s it.

And my claim is that in some small number of decades, either this will give rise to a very enabling mapping of the space for AI, for bioengineering, for biology, whatever; or we are going to find out that it really sucks, because it really is a random grab bag of stuff, and we tried the optimistic research program, it failed, and we're just going to have to live with surprise.

I mean, I doubt that’s going to happen, but it’s a possible outcome.

But do you think there is some place where the information is stored about these distributions that are being sampled into the thin interfaces? Like an actual place?

Place is weird, because it isn't the same as our physical spacetime. I don't think it's that, so calling it a place is a little weird.

No, but like in physics, general relativity describes a spacetime. Could other physics-like theories describe this other space where information is stored, with laws about information that are maybe different, but in the same spirit?

Yes, I definitely think there are going to be systematic laws. I don't think they're going to look anything like physics. You can call it physics if you want, but I think it's going to be so different that it probably just breaks the word. And whether the notion of information is going to survive that, I'm not sure.

But I definitely think there are going to be laws, and I think they're going to look a lot more like aspects of psychology and cognitive science than like physics.

That’s my guess.

So what does it look like to prove that world exists?

What it looks like is a successful research program that explains how you pull particular patterns when you need them and why some patterns come and

others don’t and show that they come from an ordered space.

Across a large number of organisms?

Well, it's not just organisms. And you can talk to the machine learning people about how they got to this point, because this is not just me; there are a bunch of different disciplines converging on this now, simultaneously.

You're going to find, again, just like in mathematics, where everybody is looking at different things from different directions: oh my God, there is one underlying structure that seems to inform all of this.

In physics, in mathematics, in computer science, in machine learning, possibly in economics, certainly in biology, possibly in cognitive science, we're going to find these structures. It was already obvious in Pythagoras's time that there are these patterns. The only remaining question is: are they part of an ordered, structured space? And are we up to the task of mapping out the relationship between what we build and the patterns that come through it?

So from the machine learning perspective, is it then the case that even something as simple as LLMs are sneaking up onto this world?

That the representations that they form are sneaking up to it?

Well, I've given this talk to some audiences, and especially in the organicist community, people like the first part, where it's like: okay, now there's an idea for what the quote-unquote magic is that's special about living things, and so on.

Now, if we could just stop there, we would have dumb machines that just do what the algorithm says, and we would have these magical living interfaces that can be the recipients of these ingressions. Cool, right? We can cut up the world in this way.

Unfortunately, or fortunately, I think that's not the case. I think that even simple, minimal computational models are to some extent beneficiaries of these free lunches.

And this goes back to the thin-client interface idea: the theories we have of both physics and computation, the theory of algorithms, Turing machines, all that good stuff, those are all good theories of the front-end interface. They're not complete theories of the whole thing. They capture the front end, which is why these things are surprising when they happen.

I think that when we see embryos of different species, we are pulling from well-trodden, familiar regions of that space.

And we know what to expect: frog, snake, whatever.

When we make cyborgs and hybrots and biobots, we are pulling from new regions of that space that look a little weird and unexpected, but we can still kind of get our minds around them.

When we start making proper AIs, we are fishing in a region of that space that may never have had bodies before, that may never have been embodied.

And what we get from that is going to be extremely surprising.

And the final thing to mention on that: because of the inputs from this platonic space, some of the really interesting things that artificial constructs can do are not because of the algorithm.

They’re in spite of the algorithm.

They are filling up the spaces in between.

There’s what the algorithm is forcing you to do.

And then there’s the other cool stuff it’s doing, which is nowhere in the algorithm.

And if that's true, and we think it's true even of very minimal systems, then in this whole business of language models and AIs in general, watching the language part may be a total red herring, because the language is what we force them to do. The question is: what else are they doing that we are not good at noticing?

And I think becoming better at this is a kind of existential step for humanity, because we are not good at recognizing these things.

Now you've got to tell me more about this behavior that is observable but unrelated to the explicitly stated goal of a particular algorithm. So, you looked at a simple algorithm: sorting.

Can you explain what was done?

Sure.

First, just the goal of the study.

There are two things that people generally assume.

One is that we have a pretty good intuition about what kinds of systems are going to have competencies.

So from observing biologicals, we’re not terribly surprised when biology does interesting things.

Everybody always says, well, it’s biology.

You know, of course it does all this cool stuff.

And yeah, but we have these machines, and the whole point of having machines and dumb algorithms and so on is that they do exactly what you tell them to do, right? And people feel pretty strongly that that's a binary distinction, and that we can carve up the world in that way.

So I wanted to do two things. First of all, to explore that, and hopefully break the assumption that we're good at seeing this, because I think we're not. And I think it's extremely important that we understand very soon that we need to get much better at knowing when to expect these things.

The other thing I wanted to find out: mostly, people assume that you need a lot of complexity for this. When somebody says, well, the capabilities of my mind are not properly encompassed by the rules of biochemistry, everybody says: yeah, that makes sense, you're very complex, your mind does things you didn't see coming from the rules of biochemistry. We know that.

So mostly people think it has to do with complexity. And what I would like to find out, as part of understanding what kinds of interfaces give rise to what kinds of ingressions, is: is it really about complexity?

How much complexity do you actually need? Is there some threshold after which this happens? Is it really specific materials? Is it biologicals? Is it something about evolution?

What is it about these kinds of things that allows this surprise, this idea that we are more than the sum of our parts?

And I had a strong intuition that none of those things are actually required, that this kind of magic, so to speak, seeps into pretty much everything.

So, to look at that, I also wanted an example with significant shock value, because the thing with biology is that there's always more mechanism to be discovered, right? There's infinite depth to what the materials are doing, and somebody will always say: well, there's a mechanism, I just haven't found it yet.

So I wanted an example that was simple and transparent, so you could see all of it; there was nowhere to hide. I wanted it to be deterministic, because I didn't want this to be about unpredictability or stochasticity. And I wanted it to be something familiar to people, minimal, and to use it as a model system for honing our ability to take a new system and look at it with fresh eyes.

And that’s because the sorting algorithms have been studied for over 60 years.

We all think we know what they do and what their properties are.

The algorithm itself is just a few lines of code; you can see exactly what's there, and it's deterministic. That's why: I wanted the most shock value out of a system like that if we were to find anything, and to use it as an example of taking something minimal and seeing what can be gotten out of it.

So I'll describe two interesting things about it, and then we have lots of other work coming in the next year on even simpler systems. It's actually crazy.

So the very first thing is this. Let's take bubble sort, or really any of these sorting algorithms. What you're starting out with is an array of jumbled-up integers, and what the algorithm is designed to do is eventually arrange them all into order.

What it generally does is compare some pieces of that array and, based on which one is larger, swap them around. You can imagine that if you just keep comparing and swapping, eventually you get all the digits in order.

So here's the first thing I decided to do, and this is the work of my student Haining Zhang, and then Adam Goldstein, on this paper. This goes back to our original discussion about putting a barrier between a system and its goals. The first thing I asked was: okay, how do we put a barrier in?

Well, how about this?

The traditional algorithm assumes that the hardware is working correctly. So if you have a seven and then a five, and the algorithm says to swap them, you swap the five and the seven and then you go on. You never check: did it actually swap? Because you assume it's reliable hardware.

Okay.

So what we decided to do was to break one of the digits so that it doesn’t move.

When you tell it to move, it doesn’t move.

We don’t change the algorithm.

That’s really key.

We do not put anything new in the algorithm that says what to do if the damn thing didn't move.

Okay.

Just run it exactly the same way.

What happens?

Turns out something very interesting happens.

It still works. It still sorts, but it eventually does so by moving all the other stuff around the broken number.

Okay.

That makes sense.

But here’s something interesting.

Suppose that at any given moment we plot the degree of sortedness of the array as a function of time.

If you run the normal algorithm, it's guaranteed to get where it's going: it has to sort, and it will always reach the end.

But when it encounters one of the broken digits, the actual sortedness goes down, in order to then recoup and reach better order later. What it's able to do is go against the very thing it's trying to do, to go around the obstacle in order to meet its goal later on.
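A minimal sketch of this experiment, with assumptions of mine: I use selection sort (whose swaps are long-range, so values can route around an immovable cell), a simple adjacent-pairs sortedness measure, and a "hardware" layer that silently refuses to move one value; the paper's exact algorithms and metrics may differ:

```python
import random

def sortedness(a):
    # Fraction of adjacent pairs in non-decreasing order (a simple proxy;
    # the paper may define sortedness differently).
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

def selection_sort_broken_hardware(arr, frozen, trace):
    # The algorithm is unchanged: it issues swaps and never checks whether
    # they happened. The "hardware" refuses to move any value in `frozen`.
    a = arr[:]
    for i in range(len(a)):
        j = min(range(i, len(a)), key=lambda k: a[k])   # suffix minimum
        if a[i] not in frozen and a[j] not in frozen:   # broken cells stay put
            a[i], a[j] = a[j], a[i]
        trace.append(sortedness(a))
    return a

random.seed(0)
arr = random.sample(range(20), 20)
trace = []
result = selection_sort_broken_hardware(arr, frozen={arr[7]}, trace=trace)
print(result)                          # sorted around the immovable value
print(["%.2f" % s for s in trace])     # inspect for dips before recovery
```

Plotting `trace` is where the delayed-gratification signature would show up: sortedness temporarily dropping on the way to a better final state.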

Now, if I showed this to a behavioral scientist and didn't tell them what system was doing it, they would say: well, we know what this is. This is delayed gratification, the ability of a system to go against its gradient to get where it needs to go.

Now, imagine you take two magnets and you put a piece of wood between them. What the magnets are not going to do is go around the barrier to get to their goal; they're not smart enough to go against their gradient. They're just going to keep pulling straight at each other. Some animals are smart enough, right? They'll go around. And the sorting algorithm is smart enough to do that.

But the trick is there are no steps in the algorithm for doing that.

You could stare at our algorithm all day long.

You would not see that this thing can do delayed gratification.

It isn’t there.

Now, there are two ways to look at this. On the one hand, the reductionist physics approach: you could ask, did it follow all the steps in the algorithm? Yeah, it did. Well then, there's nothing to see here, no magic; it does what it does. It didn't disobey the algorithm, right?

I’m not claiming that this is a miracle.

I’m not saying it disobeys the algorithm.

I'm not saying it's failing to sort. I'm not saying it's doing some crazy quantum thing. I'm not saying any of that.

What I'm saying is that other people might call this emergent. What it has are properties that are not complexity, not unpredictability, not perverse instantiation as sometimes in artificial life.

What it has are unexpected competencies, recognizable by behavioral scientists, meaning different types of cognition. Primitive? Well, we wanted it primitive, so there you go; it's simple. Competencies that you didn't have to code into the algorithm.

That's very important: you get more out than you put in, and you didn't have to do anything extra for it.

You get these surprising behavioral competencies, not just complexity.

That’s the first thing.

The second thing, which is also crazy, requires a little explanation.

In the typical sorting algorithm, you have a single top-down controller: I'm sort of God, looking down at the numbers and swapping them according to the algorithm.

And this goes back to the title of the paper, which talks about agential data and self-sorting algorithms; it's back to the question of who's the pattern and who's the agent. We said: what if we give the numbers a little bit of agency?

We're not going to have any kind of top-down sort. Every single number knows the algorithm and just does whatever the algorithm says. So if I'm a five, I just execute the algorithm, and the algorithm will try to make sure that to my right is a six and to my left is a four. That's it.

So every digit is autonomous; it's like an ant colony. There is no central planner. Everybody just runs their own algorithm. Okay, so we're just going to do that.

And by the way, one of the values of doing that is that you can simulate biological processes

because in biology, if I have a frog face and I scramble all the different organs, every tissue is going to rearrange itself so that ultimately you have nose, eyes, head: you're going to have an order, right?

So you can do that, okay, fine. But once you've done this, you can also do something cool that you can't do with a standard algorithm.

You can make a chimeric algorithm.

What I mean is that not all the cells have to follow the same algorithm: some of them might follow bubble sort, some might follow selection sort.

It’s like in biology.

What we do when we make chimeras, we make frogolotls.

So frogolotls have some frog cells.

They have some axolotl cells.

What is that going to look like?

Does anybody know what a frogolotl is going to look like?

It's actually really interesting: despite all the genetics and the developmental biology, you have the genomes, the frog genome and the axolotl genome, and nobody can tell you what a frogolotl is going to look like.

This is back to your question about physics and chemistry. You can know everything there is to know about how the physics and the genetics work, but the decision-making? Baby axolotls have legs; tadpoles don't have legs. Is a frogolotl going to have legs? Can you predict that from understanding the physics of transcription and all of that?

Anyway, you see, this is an intersection of biology, physics, and cognition. So we made chimeric algorithms: half the digits, assigned randomly, are doing bubble sort, and the other half are doing, say, selection sort.

But once a digit has chosen bubble sort, it sticks with bubble sort. We haven't done the version where they can swap between algorithms; you label them, and they stick to it.

The first thing we learned is that distributed sorting still works. It's amazing: you don't need a central planner. When every number is doing its own thing, the array still gets sorted.

That’s cool.

The second thing we found is that when you make a chimeric algorithm, where the algorithms don't even match, that works too.

The thing still gets sorted.

That’s cool.

But the most amazing thing is when we looked at something that had nothing to do with sorting,

and that is, we asked the following question.

We defined, and Adam Goldstein actually named this property, I think well, the algotype of a single cell.

It’s not the genotype.

It’s not the phenotype.

It’s the algotype.

The algotype is simply this.

What algorithm are you following?

Which one are you?

Are you a, are you a selection sort or a bubble sort?

Right?

That’s it.

There are two algotypes.

And we simply asked the following question.

During the process of sorting, what are the odds that, whatever algotype you are, the cells next to you are your same type?

It's not the same as asking how the numbers are sorted, because it has nothing to do with the numbers; it's just about what type you are.

It’s more about clustering than sorting.

Clustering.

Well, that’s exactly what we call it.

We call it clustering.

So now think of what happens, and you can see this on that graph; it's the red line.

You start off with clustering at 50%, because, as I told you, we assign the algotypes randomly, so the odds that the cell next to you is the same as you are one half, 50%.

You know, there’s only two algotypes.

In the end, it is also 50%, because the thing that dominates is the sorting algorithm, and the sorting algorithm doesn't care what type you are; you've got to get the numbers in order. So by the time you're done, you're back to random algotypes, because you have to get the numbers sorted.

But in between, you get a significant increase. Look at the graph: the control is in the middle, the pink is in the middle. In between, you get significant amounts of clustering, meaning that certain algotypes like to hang out with their buddies for as long as they can.
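Here is a sketch of the distributed, chimeric setup, continuing the earlier code; the cell-level step rules and the clustering measure are my plausible reading, not necessarily the paper's exact protocol:

```python
import random

def sortedness(cells):
    v = [val for val, _ in cells]
    return sum(v[i] <= v[i + 1] for i in range(len(v) - 1)) / (len(v) - 1)

def clustering(cells):
    # Odds that your right-hand neighbor shares your algotype.
    t = [algo for _, algo in cells]
    return sum(t[i] == t[i + 1] for i in range(len(t) - 1)) / (len(t) - 1)

def act(cells, i):
    # Each cell executes one step of its own algorithm from its own vantage
    # point; the algotype travels with the value whenever cells swap.
    if cells[i][1] == "bubble":
        if i + 1 < len(cells) and cells[i][0] > cells[i + 1][0]:
            cells[i], cells[i + 1] = cells[i + 1], cells[i]
    else:  # "selection": pull the suffix minimum into your own position
        j = min(range(i, len(cells)), key=lambda k: cells[k][0])
        if cells[j][0] < cells[i][0]:
            cells[i], cells[j] = cells[j], cells[i]

random.seed(1)
cells = [(v, random.choice(["bubble", "selection"]))
         for v in random.sample(range(20), 20)]
trace = []
for _ in range(4000):                          # no central controller:
    act(cells, random.randrange(len(cells)))   # a random cell takes a turn
    trace.append((sortedness(cells), clustering(cells)))
print(sortedness(cells))            # should approach 1.0: chimeric sort works
print(max(c for _, c in trace))     # look for the transient clustering bump
```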

Now, here's one more thing, and then I'll give the philosophical significance of this.

We saw this and I said: that's nuts, because the algorithm doesn't have any provisions for asking "what algotype am I?" or "what algotype is my neighbor?"

Or: "if we're not the same, I'm going to move to be next to my own kind." If you wanted to implement this, you would have to write a whole bunch of extra steps. You would have to take a whole bunch of observations of your neighbor to see how he's acting, then infer what algotype he is, then go stand next to the one that seems to have the same algotype as you. You would have to take a bunch of measurements to say: wait, is that guy doing bubble sort or selection sort?

If you wanted to implement this, it's a whole bunch of algorithmic steps.

None of that exists in our algorithm.

You don’t have any way of knowing what algotype you are or what anybody else is.

Okay.

We didn’t have to pay for that at all.

So notice a couple of interesting things. The first is that this was not at all obvious from the algorithm itself; the algorithm doesn't say anything about algotypes. The second is that we paid computationally for all the steps needed to get the numbers sorted, right? You pay a certain computational cost.

The clustering was free.

We didn’t pay for that at all.

There were no extra steps.

So this gets back to your other question of how do we know there’s a platonic space?

And this is one of the craziest things we're doing: I actually suspect we can get free compute out of it. I suspect one of the things we can do here is use these ingressions in a useful way that doesn't require you to pay physical costs. We know every bit operation has an energy cost that you have to pay.

The clustering was free.

Nothing extra was done.

Yeah.

Just to describe this plot for people who are listening: on the x-axis is the percentage of completion of the sorting process, and on the y-axis is the sortedness of the list of numbers. The red line is the degree to which they're clustered. And you're saying that there's this unexpected competency of clustering.

And I should comment that I’m sure there’s a theoretical computer scientist listening to this

saying I can model exactly what is happening here and prove that the clustering increases

and decreases.

So, taking the specific instantiation of the thing you've experimented with, and proving certain properties of it. But the point is that there's a more general pattern here: probably other unexpected competencies that you haven't discovered yet emerge from this, and you could get free computation out of this thing.

So this goes back to the very first thing you said about, uh, physicists thinking that

physics is enough.

You’re a hundred percent correct that somebody could look at this and say, well, I see exactly

why this is happening.

We can track, we can track through the algorithm.

Yeah, you can.

There’s no miracle going on here, right?

I’m not, the hardware isn’t doing some crazy thing that it wasn’t supposed to do.

The point is that despite following the algorithm to do one thing, it is also, at the same time, doing other things that are neither prescribed nor forbidden by the algorithm.

It's the space between chance and necessity, which is how a lot of people see these things. It's that free space, for which we don't really have a good vocabulary, where the interesting things happen. And to whatever extent it's doing other useful things, that stuff comes at no extra computational cost.

Now there's one other cool thing about this, and it's the beginning of a lot of thinking I've done about how this relates to AI and things like that: intrinsic motivations. The sorting of the digits is what we forced it to do. The clustering is an intrinsic motivation.

We didn’t ask for it.

We didn’t expect it to happen.

We didn't explicitly forbid it, but we didn't ask for it either; we didn't know.

This is a great definition of the intrinsic motivation of a system.

So when people say: oh, that's a machine, it only does what you programmed it to do; I, as a human, have intrinsic motivation, I'm creative, machines don't do that. Well, even this minimal thing has a minimal kind of intrinsic motivation: something that is not forbidden by the algorithm, but isn't prescribed by the algorithm either.

And I think that's an important third thing besides chance and necessity.

Something else that's fun about this: when you think about intrinsic motivations, think about a child. If you make him sit in math class all day, you're never going to know what other intrinsic motivations he might have.

Who knows what else he might be interested in.

So I wanted to ask: if we let off the pressure on the sorting, what would happen? That's hard, because if you mess with the algorithm, it's no longer the same algorithm.

So you don’t want to do that.

So we did something that I think was kind of clever: we allowed repeat digits.

If you allow repeat digits in your array, all the fives can still come after all the fours and before all the sixes, but the fives can stay as clustered as you want. So this thing at the end, where they have to get de-clustered in order for the sorting to finish, we thought we could let off that pressure a little bit.

And if you do that, if all you do is allow some extra repeat digits, the clustering gets bigger. It will cluster as much as you let it.

The clustering is what it wants to do.

The sorting is what we’re forcing it to do.
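In the sketch above, that relaxation is a one-line change (a hypothetical tweak of mine, not necessarily the paper's exact protocol): draw values with replacement so ties are allowed, and equal values can stay clustered by algotype even in a fully sorted array:

```python
# Allow repeat digits: with ties permitted, a fully sorted array no longer
# forces algotypes to de-cluster, so the clustering can persist.
cells = [(random.randrange(10), random.choice(["bubble", "selection"]))
         for _ in range(20)]
```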

And my only point is: if bubble sort, which has been gone over and over who knows how many times, has these kinds of things that we didn't see coming, what about the AIs, the language models, everything else? Not because they talk, not because they say they have an inner perspective or any of that, but just because even the most minimal system surprises us with what happens.

And frankly, when I see this, tell me if it doesn't sound like our whole existential story: for the brief time that we're here, the universe is going to grind us into dust eventually, but until then we get to do some cool stuff that is intrinsically motivating to us, that is neither forbidden by the laws of physics nor determined by the laws of physics, and eventually it comes to an end.

So I think that aspect of it, that there are spaces, even in algorithms, in which you can do new things, not just random stuff, not just complex stuff, but things that are easily recognizable to a behavioral scientist.

You see, that’s the point here.

And I think that kind of intrinsic motivation is what's telling us about this idea that we can carve up the world: we can say, okay, biology is complex, cognition, who knows what's responsible for that, but at least we can cut off a chunk of the world and say, these are the dumb machines, these are just algorithms.

Whereas the rules of biochemistry don't explain everything we want to know about how psychology is going to go, at least the rules of algorithms tell us exactly what the machines are going to do, right? We had some hope that we'd carved off a little part of the world where everything is nice and simple and exactly what we said it was going to be.

I think that failed.

I think it was a good try.

I think we have good theories of interfaces, but even, even the simplest algorithms have,

have these kinds of things going on.

And, and so that’s, that, that’s why I think something like this is significant.

Do you think that in all kinds of systems, of varying complexity, there will be things the system wants to do and things it's forced to do? Are there unexpected competencies to be discovered in basically all algorithms and all systems?

That's my suspicion. And I think it's extremely important for us as humans to have a research program for learning to recognize and predict these things. Never mind something as simple as this: we make social structures, financial structures, the Internet of Things, robotics, all this stuff, and we think the thing we make it do is the main show. I think it is very important for us to learn to recognize the kind of stuff that sneaks into the spaces. It's a very counterintuitive notion.

By the way, I like the word emergent. I hear your criticism, and it's a really strong one, that calling something emergent is like tossing your hands up: I don't know the process. But it's a beautiful word because it's, I guess, a synonym for surprising. And this is very surprising. Still, just because it's surprising doesn't mean there's not a mechanism that explains it.

Mechanism and explanation are both not all they're cracked up to be, in this sense: anything you and I do, we could come up with the most beautiful theory, we could paint a painting, and somebody could say, well, I was watching the biochemistry and the Schrödinger equation playing out, and it totally described everything that was happening; you didn't break a single law of biochemistry, nothing to see here, right? Okay: consistent with the low-level rules. You can do the same thing here. You can look at the machine code and say, yeah, this thing is just executing machine code. You can go further and say, oh, it's quantum foam; it's just doing what quantum foam does.

And you're saying that's what physicists miss.

I'm not saying they're unaware of that; they're generally a pretty sophisticated bunch. I just think they've picked a level, and they're going to discover what is to be seen at that level, which is a lot. My point is that the stuff behavioral scientists are interested in shows up at a much lower level than you'd think.

How often do you think there's a misalignment of this kind between the thing a system is forced to do and what it wants to do? I'm thinking in particular of AI systems at various levels of complexity.

So far we've looked at about five other systems; that's a small N. But just looking at that, I would find it very surprising if bubble sort could do this and then there were some valley of death where nothing showed up until, blah blah, living things. I can't imagine that. And we actually have a system that's even simpler than this, a 1D cellular automaton, that's doing some weird stuff. If these things are to be found in systems this simple, they just have to be showing up in these other, more complex AIs.
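For reference, a minimal elementary 1D cellular automaton in Python; the rule number here is an arbitrary illustrative choice, since the conversation doesn't say which automaton the lab studies:

def step(cells, rule=110):
    # Each cell's next state is looked up from the states of its
    # left neighbor, itself, and its right neighbor (wrapping around).
    n = len(cells)
    out = []
    for i in range(n):
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

cells = [0] * 40
cells[20] = 1                      # single seed cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)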

The only thing we don't know, but we're going to find out, is to what extent there is interaction between these. I call these things side quests. Like in a game, there's the main thing you're supposed to do, and then there's everything else you can do as long as you still do it. The thing about this is, you do have to sort.

Do you have to sort?

There's no miracle: you're going to sort. But as long as you can do other stuff while you're sorting, it's not forbidden. And what we don't know is to what extent the two things are linked. If you have a system that's very good at language, do the side quests it's capable of have anything to do with language whatsoever? We don't know the answer. The answer might be no, in which case all the stuff we've been saying about language models, because of what they're saying, could be a total red herring and not really important, and the really exciting stuff is what we never looked for. Or maybe in complex systems those things become linked. In biology they're linked: evolution makes sure that the things you're capable of have a lot to do with what you've actually been selected for. In these systems, I don't know. We might find out that the side quests actually give the language some sort of leg up, or we might find that the language is just not the interesting part.

There's also an interesting question about this intrinsic motivation to cluster. Is it a property of this particular sorting algorithm? Of all sorting algorithms? Of all algorithms operating on lists of numbers? How big is this? For example, with LLMs, is it a property of any algorithm that tries to model language, or is it specific to transformers?

That's all to be discovered, and we're doing all of it: testing this stuff in other algorithms, developing suites of code to look for other properties. To some extent it's very hard, because we don't know what to look for. But we do have the behaviorist handbook, which tells you all kinds of things to look for: delayed gratification, problem solving, all of that.

I'll tell you an N-of-one example of an interesting biological intrinsic motivation, because in the alignment community there's a lot of discussion about what the intrinsic motivations of AIs are going to be. What are their goals? What are they going to want to do? Just as an N-of-one observation: with the anthrobots, the very first thing we checked, and this was not experiment number 972 out of a thousand, this was the very first thing we checked, was to put them on a plate of neurons with a big wound through them, a big scratch. The first thing they did was heal the wound. So it's an N of one, but I liked that the first intrinsic motivation we noticed in that system was benevolent: healing. I thought that was pretty cool. We don't know; maybe the next 20 things we find will be damaging effects, I can't tell you. But the first thing we saw was a positive one, and, I don't know, that makes me feel better.

What was the thing you mentioned about the anthrobots, that they can reverse aging?

There's a procedure called an epigenetic clock: you look at particular epigenetic states of cells, compare them to a curve built from humans of known age, and estimate the age. This is Steve Horvath's work, and many other people's. You take a set of cells and you can estimate their biological age.
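A toy illustration of the epigenetic-clock idea in Python: fit a curve from methylation-like features of people of known age, then estimate the age of new cells. Real clocks such as Horvath's use hundreds of CpG sites and penalized regression; the data below is synthetic and every number is invented:

import numpy as np

rng = np.random.default_rng(0)
n_people, n_sites = 200, 20
ages = rng.uniform(0, 90, n_people)
slopes = rng.normal(0, 0.01, n_sites)          # assumed drift per site per year
meth = ages[:, None] * slopes[None, :] + rng.normal(0, 0.05, (n_people, n_sites))

# "Build the curve": least squares from methylation states to known ages.
X = np.hstack([meth, np.ones((n_people, 1))])
w, *_ = np.linalg.lstsq(X, ages, rcond=None)

def predict_age(sample):
    return float(np.append(sample, 1.0) @ w)

# Cells from a ~50-year-old should come out near 50.
sample = 50 * slopes + rng.normal(0, 0.05, n_sites)
print("estimated biological age:", round(predict_age(sample), 1))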

We make the anthrobots from cells we get from human tracheal epithelium. We collaborated with Steve's group, the Clock Foundation: we sent them a bunch of cells, and it turns out that if you check the anthrobots themselves, they are roughly 20% younger than the cells they come from. That's amazing. I can give you a theory of why it happens, although we're still investigating, and then I can tell you the implications for longevity and things like that.

My theory for why it happens: I call it age evidencing. I think what's happening here, as with a lot of biology, is that cells have to update their priors based on experience. They come from an old body, so they have strong priors about how many years they've been around. But their new environment screams otherwise. There are no other cells around; you're being bent into a pretzel; they actually express some embryonic genes. Everything says: you're an embryo. It's not enough new evidence to roll them all the way back, but it's enough to update them about 28% back.
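A hedged toy model of "age evidencing" as Bayesian updating, in Python: a cell holds a prior over its own age, and strong embryo-like evidence shifts the posterior substantially younger without resetting it to zero. All distributions and constants here are invented for illustration:

import numpy as np

ages = np.arange(0, 91)
prior = np.exp(-0.5 * ((ages - 60) / 7.0) ** 2)   # cell "believes" it is ~60
prior /= prior.sum()

# Likelihood of seeing embryonic cues (no neighbors, embryonic genes on)
# at each age: high when young, falling off quickly with age.
likelihood = np.exp(-ages / 3.0)

posterior = prior * likelihood
posterior /= posterior.sum()

print("prior mean age:", round(float((ages * prior).sum()), 1))
print("posterior mean age:", round(float((ages * posterior).sum()), 1))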

Yeah.

So it's similar to when an older adult gives birth to a child. You're saying you can fake it till you make it with age: the environment convinces the cell that it's young.

Well, first of all, yes, that's my hypothesis, and we have a whole bunch of research being done on it. There was a study where they went into an old-age home and redid the decor in sixties style, from when the residents were young, and they found all kinds of improvements in blood chemistry and the like, because it mentally took people back to the way they were at that time. I think this is a basal version of that. If you find yourself in an embryonic environment, what's more plausible than that you're young? I think this is the basic move of biology: update your priors based on experience.

Do you think that's actually actionable for longevity? Could you convince cells that they're younger and thereby extend lifespan?

This is what we're trying to do, yeah.

Could it be as simple as that?

Well, I'm not claiming it's simple; it is in no way simple. All of the regenerative medicine work we do balances on one key thing: learning to communicate with the system. When we make gut tissue into an eye, you have to convince those cells that their priors ("we are gut precursors") are wrong, and that they should adopt a new worldview: you're going to be an eye. Being convincing, figuring out what kinds of messages are convincing to cells, how to speak the language, how to get them to take on new beliefs, literally, is at the root of all of these future advances in birth defects, regenerative medicine, and cancer. That's what's going on here. So I'm not saying it's simple, but I can see the path.

Going back to the platonic space: I have to ask, if our brains are indeed thin-client interfaces to that space, what does that mean for our minds? Can we upload a mind? Can we copy it? Can we ship it to other planets? What does that mean for where, exactly, the mind is stored?

Yeah, a couple of things. We are now beyond anything I can say with any certainty; this is total conjecture, because we don't know yet. The whole point is that we don't really understand very well the relationship between the interface and the thing.

And the thing you're currently working on is to map this space.

Correct. And we are beginning to map it, but it's a massive effort.

So, a couple of conjectures. One: I strongly suspect that the majority of what we think of as the mind is the pattern in that space. And one of the interesting predictions of that model, which is not a prediction of modern neuroscience, is that there should be cases of very minimal brain with normal IQ. This has been seen clinically. Karina Kaufman and I recently reviewed a bunch of cases of humans with very little brain tissue and normal, sometimes above-normal, intelligence. Now, things are not simple, because that obviously doesn't happen all the time; most of the time it doesn't. So what's going on? We don't understand. But it's a very curious finding that modern neuroscience does not predict. I'm not saying neuroscience can't accommodate it; you can bend it into a pretzel, you can appeal to redundancies and things like that. You can accommodate it, but you don't predict it. So there are these incredibly curious cases.

Now, do I think you can copy it? No, because what you'd be copying is the interface, the front end. The action is actually the pattern in the platonic space. Will you be able to copy that? I doubt it. But what you could do is produce another interface through which that particular pattern comes through. I think that's probably possible. I can't say at this point what it would take, but my guess is that it's possible.

Is your gut feeling that that process, if it's possible, is different from copying? That it looks more like creating a new thing than copying one?

For the interface, yes. Here's my prediction for the Star Trek transporter. For whatever reason, right now your brain and body are attuned and attractive to a particular pattern, which is your set of psychological propensities. If we could rebuild that exact same thing somewhere else, I don't see any reason the same pattern wouldn't come through it the way it comes through this one. That's a guess. So what you'd be copying is the physical interface, hoping to maintain whatever it is about that interface that was appropriate for that pattern. We don't really know what that is at this point.

When we've been talking about mind, the most important part to me, because I'm a human, is the self that comes along with it, the feeling that this mind belongs to me. Does that come along with all minds? Not the subjective experience; the subjective experience is important to consciousness. I mean the ownership.

I suspect so, and I think so because of the way we come into being. One of the things I should be working on is a paper called "Booting Up the Agent," about the very earliest steps of becoming a being in this world. You can do this for a computer. Before you switch the power on, it belongs to the domain of physics; it obeys the laws of physics. You switch the power on, and some number of nanoseconds or microseconds later you have a thing that, look, is taking instructions off the stack and executing them. Now it's executing an algorithm. How did you get from physics to executing an algorithm? What exactly was happening during the boot-up, before it starts to run code? We can ask the same question in biology: what are the earliest steps of becoming a being?

Yeah, that's a fascinating question. Through embryogenesis, at which point are you booting up?

Yeah, exactly.

Do you have a hope of an answer to that?

I think so, in two ways. The first is just physically what happens. I think your first task as a being, and I don't think this is a binary thing, I think it's a positive feedback loop that cranks on up and up, your first task coming into this world is to tell a very compelling story to your parts. As a biological, you are made of agential parts. Those parts need to be aligned, literally, toward a goal they have no comprehension of. If you're going to move through anatomical space by means of a bunch of cells that only know physiological and metabolic spaces and the like, you are going to have to develop a model and bend their action space: deform their option space with signals, with behavior-shaping cues, with rewards and punishments, whatever you've got. Your job as an agent is ownership of your parts, alignment of your parts. I think that is fundamentally what gives rise to this ability. Now, that also means having a boundary: this is the stuff I control, this is me; this other stuff over here is the outside world. And you don't know where that boundary is, by the way; you have to figure it out.

And in embryogenesis, it's really cool. As a grad student I used to do an experiment with duck embryos, with the flat blastodisc. You take a needle and put some scratches into it, and every island you make, for a while, until they heal up, thinks it's the only embryo; there's nothing else around, so it becomes an embryo. Eventually you get twins and triplets and quadruplets. But each one of them, at the border where they're joined: where do I end and where does he begin? You have to know what your borders are. So that action of aligning your parts and coming to be, I'm even going to say this emergence, we just don't have a good vocabulary for it, this emergence of a model that aligns all the parts, is really critical to keeping that thing going.

There's something else that's really interesting.

I was thinking about it in the context of this question of beautiful ideas. There's this amazing thing we found; this is largely the work of Federico Pigozzi in my group. A couple of years ago we saw that networks of chemicals can learn. They have five or six different kinds of learning they can do. And what I asked the group to do was calculate the causal emergence of those networks while they're learning. What I mean is this: if you're a rat and you learn to press a lever and get a reward, there's no individual cell that had both experiences. The cells at your paw touched the lever; the cells in your gut got the delicious reward. No individual cell has both experiences, so who owns that associative memory? The rat. That means you have to be integrated: if you're going to form associative memories from different parts, you have to be an integrated agent, and we can now measure that with metrics of causal emergence, like phi. So we know that in order to learn, you have to have significant phi. But I wanted to ask the opposite question: what does learning do to your phi level? Does it do anything to your degree of being an agent that is more than the sum of its parts? So we trained the networks, and sure enough, some of them, not all, but some, increase their phi as you train them. So basically what we found is a positive feedback loop: every time you learn something, you become more of an integrated agent, and every time that happens, it becomes easier to learn. It's a virtuous cycle, an asymmetry that points upward for agency and intelligence. Now, back to our platonic space: where does that come from? It doesn't come from evolution; you don't need evolution for this. Evolution will optimize the crap out of it, for sure, but you don't need it. Does it come from physics? It comes from the rules of information, from causal information theory and the behavior of networks, which are mathematical objects. It's not anything given to you by physics or by a history of selection. It's a free gift from math, and those two free gifts from math lock together into a spiral that, I think, causes a simultaneous rise in intelligence and a rise in collective agency. That's been just amazing to think about.
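A very rough Python sketch of the measurement idea: a two-node stochastic network where strengthening the coupling (a stand-in for learning) raises an integration proxy. Real causal-emergence metrics such as phi are far more involved; plain mutual information between the two nodes is my substitute here:

import math, random

def simulate(coupling, steps=20000):
    # Each node copies the other with probability `coupling`, else goes random.
    x, y = 0, 1
    counts = {}
    for _ in range(steps):
        nx = y if random.random() < coupling else random.randrange(2)
        ny = x if random.random() < coupling else random.randrange(2)
        x, y = nx, ny
        counts[(x, y)] = counts.get((x, y), 0) + 1
    return counts

def mutual_information(counts):
    total = sum(counts.values())
    px, py = {}, {}
    for (a, b), c in counts.items():
        px[a] = px.get(a, 0) + c
        py[b] = py.get(b, 0) + c
    return sum((c / total) * math.log2((c / total) / ((px[a] / total) * (py[b] / total)))
               for (a, b), c in counts.items())

for coupling in (0.1, 0.5, 0.9):      # "training" strengthens the coupling
    print(coupling, round(mutual_information(simulate(coupling)), 3))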

Well, that free gift from math, I think, is extremely useful in biology, when you have small entities forming networks and hierarchies that build more and more complex organisms. This speaks to embryogenesis, which I think is one of the coolest things in the universe. In fact, you acknowledge its coolness in the ingressing minds paper, writing, quote: "Most of the big questions of philosophy are raised by the process of embryogenesis right in front of our eyes. A single cell multiplies and self-assembles into a complex organism with order on every scale of organization and adaptive behavior. Each of us takes the same journey across the Cartesian cut, starting off as a quiescent human oocyte, a little blob thought to be well described by chemistry and physics. Gradually it undergoes metamorphosis and eventually becomes a mature human with hopes, dreams, and a self-reflective metacognition that can enable it to describe itself as not a machine," as more than its brain, body, underlying molecular mechanisms, and so on. In everything we've discussed, which has been a little technical, what can we point to as the clear intuition for how it's possible to take the leap from a single cell to a fully functioning organism full of dreams and hopes and friends and love? Because that's one of the most magical things the universe is able to create, perhaps the most magical: from simple physics and chemistry to us, talking about ourselves.

I think we have to keep in mind that physics and chemistry are not real things.

They are lenses we put on the world, perspectives where we say: for the time being, for the duration of this chemistry class or career or whatever, we are going to put aside all the other levels and focus on this one level. What is fundamentally going on during embryogenesis is an amazing positive feedback loop of collective intelligence: the physical interface is scaling the cognitive light cone it can support. It starts as a molecular network, and a molecular network can already do things like Pavlovian conditioning; you don't start from zero. A simple molecular network is already hosting some patterns from the platonic space that look like Pavlovian conditioning, and that's just a molecular network. Then you become a cell, then many cells, and now you're navigating anatomical morphospace and hosting all kinds of other patterns. And, this is the stuff we're trying to work out now, I think there's a consistent feedback between the ingressions you get and the ability to have new ones, a positive feedback cycle: the more of these free gifts you pull down, the more they allow you to physically develop in ways that make you suitable for more, and higher, ones. And this goes on and on until you're able to pull down a full human set of behavioral capacities.

What is the mechanism of such radical scaling of the cognitive cone? Is it the same thing you were talking about with the network of chemicals being able to learn?

I'll give you two mechanisms that we found, but just to be clear, these are mechanisms of the physical interface. What we don't have yet is a mature theory of how they map onto the space; that's just beginning. But I'll tell you what the physical side looks like. The first has to do with stress propagation. Imagine you've got a bunch of cells, and there's a cell down here that needs to be up there. All the other cells are exactly where they need to be, so they're happy; their stress is low. Now, stress is basically a physical implementation of the error function. The amount of stress is the delta between where you are now and where you need to be, and not necessarily in physical position: it could be in anatomical space, in physiological space, in transcriptional space, whatever. It's the delta from your set point. So our one cell is stressed out, but the others are happy; they're not moving, and you can't get past them. Now imagine you could leak your stress, whatever your stress molecule is. The cool thing is that evolution has highly conserved these molecules; we're studying all of them. If you start leaking your stress molecules, everything around you starts to get stressed. And when things get stressed, their temperature rises, not physical temperature, but temperature in the sense of simulated annealing: their plasticity goes up, because they feel stressed and need to relieve it. And because all the stress molecules are the same, they don't know it isn't their stress; they're as irritated by it as if it were their own. So they become a little more plastic, ready to adopt different fates. You get to where you're going, and then everybody's stress can drop. Notice what can happen by a very simple mechanism, just being leaky with your own stress: my problems become your problems. Not because you're altruistic, not because you actually care about my problems, there's no mechanism for you to care about my problems, but that simple mechanism means faraway regions become responsive to the needs of other regions, so that complex rearrangements can happen. It's alignment of everybody to the same goal through this very dumb, simple stress sharing.
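A hedged toy simulation of the leaky-stress mechanism in Python. Misplacement from the set point is the error ("stress"), cells leak it to their neighbors, and the leaked stress acts like a simulated-annealing temperature that raises neighbors' plasticity. Every rule and constant is invented to illustrate the idea:

import math, random

N = 20
cells = list(range(N))
random.shuffle(cells)                 # target: cell value i belongs at slot i
stress = [0.0] * N

def misfit(i):
    return abs(cells[i] - i)          # delta from the set point

for _ in range(30000):
    # Stress leaks: each slot picks up a share of its neighbors' misfit.
    for i in range(N):
        nearby = [misfit(j) for j in (i - 1, i + 1) if 0 <= j < N]
        stress[i] = 0.9 * stress[i] + 0.1 * (sum(nearby) / len(nearby))
    # A random adjacent pair considers swapping; leaked stress = temperature.
    i = random.randrange(N - 1)
    delta = (abs(cells[i + 1] - i) + abs(cells[i] - (i + 1))
             - misfit(i) - misfit(i + 1))
    temperature = stress[i] + stress[i + 1] + 1e-9
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        cells[i], cells[i + 1] = cells[i + 1], cells[i]

print("remaining total misfit:", sum(misfit(i) for i in range(N)))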

Via leaky stress.

Leaky stress, right. There's another mechanism, which I call memory anonymization. Imagine two cells, and something happens to the first one, which sends a signal over to the second. With conventional signaling, the receiving cell gets the message and it's very clear that it came from outside. So the cell can do many things: it can ignore the signal, take the information on board, reinterpret it, whatever, but it knows the signal came from outside. Now imagine the kind of thing we study, called gap junctions. These are electrical synapses that directly link the internal milieus of two cells. If something happens to the first cell, say it gets poked and there's a calcium spike that propagates through the gap junction, the second cell now has the same information, but it has no idea: wait a minute, is that my memory or his memory? It's the same components. So what you're able to do now is have a mind meld between the two cells, where nobody is quite sure whose memory it is. And when you share memories like this, it's harder to say that I'm separate from you. I don't mean every single memory; they still have some identity, and there are many complexities you can layer on top of this. But to a large extent they have a bit of a mind meld, and a large group of cells ends up with joint memories of what happened to us, as opposed to: you know what happened to you, and I know what happened to me. That enables a higher cognitive light cone, because you have greater computational capacity and a greater area of concern, more things you want to manage. I don't just want to manage my tiny little memory states; because I'm getting your memories, I now know I have to manage this whole thing. So both of these mechanisms end up scaling the size of the things you care about, and that scale, the size of your concern, is a major rung on the ladder of cognition.
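A hedged sketch of memory anonymization in Python: a conventional signal carries a provenance tag, while a gap-junction share drops the identical, untagged state into both cells. The data model is mine, purely to illustrate the distinction:

from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    memories: list = field(default_factory=list)

    def receive_signal(self, event, sender):
        # Conventional signaling: the origin is explicit, so the receiver
        # can ignore, reinterpret, or accept the message.
        self.memories.append({"event": event, "origin": sender.name})

    def gap_junction_share(self, event, partner):
        # Gap junction: the same untagged state appears in both cells,
        # so neither can say whose experience it originally was.
        record = {"event": event, "origin": None}
        self.memories.append(record)
        partner.memories.append(record)

a, b = Cell("A"), Cell("B")
b.receive_signal("poke", sender=a)          # B knows this came from outside
a.gap_junction_share("calcium spike", b)    # shared, anonymous memory
print(b.memories)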

It'd be fascinating to be able to engineer that scaling; it's probably applicable to AI systems. How do you rapidly scale the cognitive cone?

Yeah. We have collaborators at a company called Softmax that we're working with on some of that. In biology, it's our cancer therapeutic. What you see in cancer, literally, is cells electrically disconnecting from their neighbors. They were part of a giant memory that was working on making a nice organ; now they can't remember any of that. Now they're just amoebas, and the rest of the body is just external environment. And what we found is that if you physically reconnect them to the network, you don't have to fix the DNA and you don't have to kill the cells with chemo. You just reconnect them, and because they're part of the larger collective again, they go back to what they were working on. So yes, I think we can intervene at that scale.

Let me ask you more explicitly about the SETI-style search for unconventional terrestrial intelligence. What do you hope to do there? How do you actually try to find unconventional intelligence all around us? And first of all, do you think there are all kinds of incredible intelligences on Earth that we haven't yet discovered?

Guaranteed. We've already seen them in our own bodies, and I don't just mean that we host a microbiome. I mean that your cells, and we have all kinds of work on this, traverse alien spaces every day: 20,000-dimensional spaces and others. They solve problems. They suffer when they fail to meet their goals; they get stress reduction when they meet them. These things are inside us, and they are all around us. I think we have an incredible degree of mind blindness to all the very alien kinds of minds around us. Looking for aliens off Earth is awesome, but if we can't recognize the ones inside our own bodies, what chance do we have of recognizing the ones out there?

Could there be a measure like IQ for mind? Not mindedness, but intelligence: something broadly applicable and generalizable to unconventional minds, where we could quantify, holy shit, this discovery is incredible because it has this IQ?

Yes and no.

The yes part is that, as we have shown, you can take existing IQ metrics, literally the existing ways people have measured the intelligence of animals and humans, and apply them to very weird things, if you have the imagination to make the interface. We've done it; we've shown creative problem solving and all of that. So yes. However, we have to be humble and recognize that all the IQ metrics we've come up with so far were derived from an N-of-one example, the evolutionary lineage here on Earth, so we're probably missing a lot. I'd say we have plenty to start with; we could keep tens of thousands of people busy just testing things now. But we have to be aware that we're probably missing a lot of important ones.

Which do you think has more interesting, intelligent, unconventional minds: the human body, or, as we were talking about off mic, something like the Amazon jungle, natural systems outside the sophisticated biological systems we're already aware of?

We don't know, because it's really hard to do experiments on larger systems; it's a lot easier to go down in scale than up. But my suspicion is, as the Buddhists say, innumerable sentient beings. By the time you get to that degree of infinity, the comparison hardly matters. I suspect there are just massive numbers of them.

I think it really matters which kinds of systems are amenable to our current methods of scientific inquiry. I spent quite a lot of hours just staring at ants when I was in the Amazon, and it's such a mysterious, wonderful collective intelligence. I don't know how amenable it is to research. I've seen some folks try; you can simulate it. But I feel like we're missing a lot.

I'm sure we are. But one of my favorite things about that kind of work: have you seen that there are at least three or four papers showing that ant colonies fall for the same visual illusions we fall for? Not the ants, the colonies. If you lay out food in particular patterns, they'll do things like complete lines that aren't there, all the same shit we fall for. So I don't think it's hopeless, but I do think we need to do a lot of work to develop tools.

Do you think all the tooling we develop, and the mapping we've been discussing, will help us do the SETI part, finding aliens out there?

I think it's essential. We are so parochial in what we expect to find in terms of life that we are going to completely miss a lot of stuff. We can't even agree, never mind on definitions of life, on what's actually important. I led a paper recently where I asked 65 or so modern working scientists for a definition of life, and we got so many different definitions across so many dimensions that we had to use AI to make a morphospace out of them. There was zero consensus about what actually matters. And if we're not good at recognizing it here, I just don't see how we're going to be good at recognizing it somewhere else.

So, given how miraculous life is here on Earth, it's clear we have so much more work to do. That said, would it be exciting to you if we found life on other planets in the solar system? What would you do with that information? Or is it just another life form that we don't understand?

I would be very excited about it, because it would give us some more unconventional embodiments to think about: a data point that's pretty far from our existing data points, at least in this solar system. So that would be cool; I'd be very excited. But I must admit that my set point for surprise has been pushed so high at this point that it would have to be something really weird to shock me. The things we see every day are just... yeah.

You've mentioned in a few places, and you wrote, that the ingressing minds paper is not the weirdest thing you plan to write. How weird are you going to get? Or maybe a better question: in which direction of weirdness do you think you'll go? In which direction are you going to expand the weird Overton window?

Well, the background to this is simply that I've had a lot of weird ideas for many, many decades, and my general policy is not to talk about stuff until it becomes actionable. The amazing thing, and I'm really kind of shocked by it, is how far the empirical work has come in my lifetime; I really didn't think we would get this far. I have this mental knob for what percentage of the weird things I think I actually say in public, and every few years, when the empirical work moves forward, I turn that knob a little. I have no idea whether we'll continue to be that fortunate, or how long I can keep doing this. But just to give you a direction: it's going to be in the direction of what kinds of things we need to take seriously as other beings with which to relate. I've already pushed it; we knew brainy things, and then we said it's not just brains, and then it's not just physical space, and it's not just biologicals, and it's not just complexity. There are a couple of other steps to take that I'm pretty sure are there, but we're going to have to do the actual work to make them actionable before we really talk about them. So: that direction.

I think it's fair to say you're one of the more unconventional humans and scientists out there. So the interesting question is: what's your process of idea generation and discovery? You've done a lot of really interesting and, as you said, actionable out-there ideas that you've actually engineered, with Xenobots and anthrobots and the like. When you go home tonight, or go to the lab, what's the process? An empty sheet of paper while you're thinking it through?

The mental part of it, funny enough, is much like making Xenobots: we make Xenobots by releasing constraints. We don't do anything to them; we just release them from the constraints they already have. A lot of it is releasing the constraints that have been placed on us mentally. Part of it is that my education was a little weird, because I was a computer scientist first and came to biology only later, so by the time I heard all the biology things we typically just take on board, I was already a little skeptical and thinking a little differently. But a lot of it comes from releasing constraints. I very specifically think: okay, this is what we know; what would things look like if we were wrong? What would it look like if I was wrong? What are we missing? What is our worldview specifically not able to see? Another way I often think is to take two things that are considered very different and imagine them as two points on a continuum. What does the middle of that continuum look like? What's the symmetry there? What's the parameter, the knob I can turn, to get from here to there? I look for symmetries a lot: okay, this thing is like that one; in what way? What's the fewest number of things I'd have to move to map this onto that? So those are the mental tools. The physical process, for me: obviously I'm fortunate to have a lot of discussions with very smart people. I've hired some amazing people in my group, so of course we have a lot of discussions, and some stuff comes out of that. And pretty much every morning I'm outside for sunrise, walking around in nature; there's just nothing better as inspiration than nature. I also do photography, and I find it a good meditative tool, because it keeps your hands and brain just busy enough. You don't have to think too much, but you're twiddling and looking and doing some stuff, and it keeps your brain off the linear, logical, careful train of thought enough to release it, so you can ideate a little more while your hands are busy.

So it's not even the thing you're photographing; it's the mechanical and mental process of doing the photography.

Right. Because I'm not walking around thinking, okay, for this experiment we've got to get this piece of equipment and so on. That goes away, and it's: okay, what's the lighting, what am I looking at? And during that time, when you're not thinking about the other stuff, the ideas arrive. I've got a notebook, and it's: look, this is what we need to do. That kind of thing.

And the actual writing down of stuff, in a notebook or on a computer: are you a super organized thinker, or is it random words here and there with drawings? And what is the space of thoughts in your head? Are they amorphous things that aren't very clear? Are you visualizing something? Is there anything you can articulate there?

I tend to leave myself a lot of voicemails, because as I'm walking around I'll think, oh man, this idea, and I'll call my office and leave myself a voicemail to transcribe later. I don't have a good enough memory to remember any of these things. What I keep is a mind map, an enormous mind map. One piece of it hangs in my lab so that people can see: these are the ideas, this is how they link together, here's everybody's project, here's what I'm working on and how it attaches to everyone else's, so they can track it. The piece that hangs in the lab is about nine feet wide; it's a silk sheet, and it's out of date within a couple of weeks of my printing it, because new stuff keeps moving around. Then there's more that isn't for anybody else's view. I try to be very organized, because otherwise I forget. Everything is in the mind map, and things go into manuscripts. Right now I have probably 162 or 163 open manuscripts in the process of being written, at various stages. When things come up, I stick them into the right manuscript in the right place, so that when I'm finally ready to finalize, I can put words around them. There are outlines of everything. So I try to be organized, because I can't rely on memory.

So there's a wide front of manuscripts, work that's continuously pushing toward completion, but it's not clear where or when any of it will be finished.

Yes, but that's just the theoretical, philosophical stuff. The empirical work we do in the lab is more focused. We know exactly: this is anthrobot aging, this is limb regeneration, this is the new cancer paper, and so on. Those things are very linear.

Where do you think the ideas come from when you're taking a walk, the ones that eventually materialize in a voicemail? What is that, for you? A lot of really interesting people feel like they're channeling from somewhere else.

I hate to bring up the platonic space again, but if you talk to any creative, that's basically what they'll tell you, and it's certainly been my experience. The way it feels to me is a collaboration. My side of the collaboration is to bust my ass and be prepped: (a) to work hard enough to be able to recognize the idea when it comes, and (b) to actually have an outlet for it, so that when it does come, we have a lab and people who can help me do it, and we can actually get it out. That's my part: be up at 4:30 a.m. doing your thing and be ready. The other side of the collaboration is that when you do that, amazing ideas come in. To say that it's me wouldn't be right. I think it's definitely coming from other places.

What advice would you give to scientists, PhD students, grad students, young scientists

that are trying to explore the space of ideas, given the very unconventional, non-standard,

unique set of ideas you’ve explored in your life and career?

Let's see. Well, the first and most important thing I've learned is not to take too much advice, so I don't like to give too much advice. But I do have one technique that I've found very useful. It isn't for everybody, but there's a specific demographic, because a lot of unconventional people reach out to me and I try to respond and help them, for whom I think it's useful. How do I describe it? It's the act of bifurcating your mind: you need two different regions. One region is the practical region of impact. In other words: how do I get my idea out into the world so that other people recognize it? What should I say? What are people able to hear? How do I pivot it? What parts do I not talk about? Which journal do I publish in? Is it time now, or do I wait two years? All the practical stuff about how it looks from the outside: I can't say this, I should say this differently, this is going to freak people out, this community wants to hear this so I can pivot it this way. That has to be there; otherwise you're not going to be in a position to follow up any of your ideas, you're not going to have a career, and you won't have resources to do anything. But it's very important that this can't be the only thing. You need another part of your mind that ignores all that shit completely, because this other part has to be pure: I don't care what anybody else thinks about this, I don't care whether it's publishable or describable, I don't care if anybody gets it or thinks it's stupid; this is what I think, and why. And you give it space to grow. If you try to mush the two together, and I've found this impossible, the practical stuff poisons the other stuff. If you're too much on the creative end, you can be an amazing thinker and nothing ever materializes. But if you're very practical, it tends to poison the creative side, because the more you think about how to present things so that other people get it, the more it constrains and bends how you start to think. What I tell my students and others is that there are two kinds of advice. There's the practical, specific kind, where somebody says: you forgot this control, or this isn't the right method. That stuff is gold; take it very seriously and use it to improve your craft. Then there's the meta advice: that's not a good way to think about it, don't work on this. That stuff is garbage, and even very successful people often give very constraining, terrible advice. One of my reviewers on a paper years ago, and I love this Freudian slip, said he was going to give me constrictive criticism. And that's exactly what he gave me: constrictive criticism. I thought, that's awesome; that's a great typo.

Well, it's very true. That bifurcation of the mind is beautifully put. Some of the most interesting people I've met sometimes fall short on the normie side, on the practical side: the emotional intelligence of how to communicate with people who have a very different worldview, who are more conservative, more conventional, more fitted to the norm. You have to have the skill to fit in, and then, again beautifully put, be able to shut that off when you go off on your own to think. Having both skills is very important. I think a lot of radical thinkers believe they're sacrificing something by learning the skill of fitting in. But if you want to have impact, first of all to build great teams that help bring your ideas to life, and second for your ideas to scale and resonate with a large number of people, you have to have that skill.

Yeah. And those are very different. Those are very different skills.

Let me ask a ridiculous question. You've already spoken about it a bit, but what, to you, is one of the most beautiful ideas you've encountered in your explorations? Maybe not just beautiful: one that makes you happy to be a scientist, to be a curious human exploring ideas.

I must say that I sometimes think about these ingressions from the space as a kind of steganography. Steganography is when you hide data and messages within the bits of another pattern that don't matter. The rule of steganography is that you can't mess up the main thing; it's a picture of a cat or whatever, and you've got to keep the cat. But where there are bits that don't matter, you can stick stuff in. I feel like all these ingressions are a kind of universal steganography: these patterns seep into everything, everywhere they can. And they're kind of shy, meaning they're very subtle. Not invisible; if you work hard, you can catch them, but they're hard to see. And the fact that I think they also affect quote-unquote machines, just as they certainly affect living organisms, is to me incredibly beautiful, and I personally am happy to be part of that same spectrum. That magic is applicable to everything. A lot of people find that extremely disturbing; some of the hate mail I get says, we were with you on the majesty-of-life thing until you got to the part where machines get it too, and now you're devaluing the majesty of life. I don't know. The idea that we're now catching these patterns, and that we're able to do meaningful research on the interfaces and all of that, is to me absolutely beautiful. And that it's all one spectrum is, I think, amazing. I'm enriched by it.
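Steganography itself is concrete enough to show. A classic least-significant-bit version in Python hides a message in the bits of an image-like array that "don't matter," leaving the visible picture essentially unchanged:

def embed(pixels, message):
    bits = [int(b) for ch in message.encode() for b in format(ch, "08b")]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit       # overwrite only the lowest bit
    return out

def extract(pixels, n_chars):
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

cat = list(range(200))                     # stand-in for the cat picture
hidden = embed(cat, "hi")
print(extract(hidden, 2))                  # -> hi
print(max(abs(p - q) for p, q in zip(cat, hidden)))   # each pixel off by at most 1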

I agree with you; I think it's incredibly beautiful. I lied: there's one more ridiculous question. It seems like we're progressing toward possibly creating superintelligent systems, AGI and ASI. If I had one and gave it to you, put you in the room with it, what would be the first question you'd ask it? Maybe the first set of questions. There are so many topics you've worked on and are interested in. Is there a first question you'd really want a solid answer to?

Well, the first thing I would ask is: how much should I even be talking to you?

For sure, because it's not clear to me at all that having somebody tell you the answer is optimal in the long run. It's the difference, when you're a kid learning math, of having an older sibling who will just tell you the answers. Sometimes it's: come on, just give me the answer, let's move on with this cancer protocol, great. But in the long run, how much of the process of discovering things yourself are we willing to give up? By getting a final answer, how much do we miss of the stuff we might have found along the way? Now, I don't know what the right balance is. I don't think it's correct to say don't ask at all, take all the time and all the blind alleys; that may not be optimal either. We don't know what the optimum is. We don't know how much we should be stumbling around versus having somebody tell us the answer.

That's actually a brilliant question to ask an AGI, then.

I mean, if it's really an AGI, I'd say: tell me what the balance is. How much should I be talking to you versus stumbling around in the lab, making my own mistakes? Is it 70/30? 10/90? I don't know. So that would be the first question.

The AGI will say: you shouldn't be talking to me.

It may well be. It may say: what the hell did you make me for in the first place? You guys are screwed. That's possible. The second question I would ask is: what's the question I should be asking you that I'm probably not smart enough to ask? That's the other thing I would say.

That's really complicated, and a really, really strong question. But again, the answer might be a question you wouldn't understand. So for me, assuming you can ask a lot of questions, I'd probably go for ones where I would understand the answer, ones that uncover some small mystery I'm super curious about. If you ask big questions like yours, which are really strong questions, I feel like I wouldn't understand the answer. If you ask it what question you should be asking, it would probably say something like: what is the shape of the universe? And you'd go: wait, why is that important? You'd be very confused by the question it proposes. I would probably just want to know, straight up, first question: how many living intelligent alien civilizations are in the observable universe? It would be nice to know whether it's zero or a lot. And then, unfortunately, it might give me a Michael Levin answer.

That's what I was about to say: my guess is it's going to be exactly the problem you described. It's going to say: oh my God, right in this room you've got... oh man.

Yeah. Everything you need to know about alien civilizations is right here in this room. In fact, it's inside your own body. Thank you, AGI. All right, Michael, one of my favorite scientists and one of my favorite humans: thank you for everything you do in this world. Truly fascinating work. Keep going, for all of us.

Thank you so much. It's great to see you. It's always a good discussion. I appreciate this. Thank you.

Thanks for listening to this conversation with Michael Levin. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, let me leave you with some words from Albert Einstein.

The most beautiful thing we can experience is the mysterious. It is the source of all true art