
Human Singularities


TRIBE Member
You may have seen the online ad describing the advent of human language as a "singularity" that humans experienced: humans from before language could never really participate in the society that exists after it.

The ad suggests that we are rapidly approaching the next singularity: our advances in technology are pushing us toward a point where those who live through it, or are born after it, will find it severely difficult, perhaps impossible, to relate to those from before it.

Having lots of free time to do so, I've kept abreast of the advances in AI, for example AlphaGo's match in South Korea and, earlier, Deep Blue at chess, as well as Boston Dynamics' walking robots (the "Northern Drunk Robot" take-off was particularly funny to me), but also advances in genetics, for example the CRISPR technique. Intel is reaching its Moore's law limit at 10nm and looking toward post-silicon semiconductors. 10nm is only about 14 silicon atoms across.
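For what it's worth, here's a back-of-envelope check on that atom count, assuming silicon's lattice constant of roughly 0.543 nm (an assumed figure; the exact count depends on whether you measure by lattice spacing, bond length, or atomic radius, so treat this as an order-of-magnitude sketch, not a correction):

```python
# Rough sanity check: how many silicon lattice constants span a 10 nm feature?
# Assumes a lattice constant of ~0.543 nm for silicon's diamond-cubic cell.
FEATURE_NM = 10.0
SI_LATTICE_NM = 0.543  # assumed value; other atom-spacing choices give different counts

cells_across = FEATURE_NM / SI_LATTICE_NM
print(f"~{cells_across:.0f} lattice constants across a {FEATURE_NM:.0f} nm feature")
```

Whichever spacing you use, the point stands: at 10nm you can count the atoms on your fingers and toes, which is why "post-silicon" is even on the table.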

If we connect a slightly more advanced AI to CRISPR technology, who is to say that AI cannot literally invent new life? When, not if, that happens, what does humanity have to say about its place in the world?

There may not be a need for physical robots; there may not even be a need for physical humans. I'm going out on a limb and saying there will always be a need for intelligence, whether it's human or artificial...

A couple of films popular with millennials touched on this topic recently: Ex Machina for one, and ... I forget the other one.

A half generation ago the same ideas were explored in "A.I." with Haley Joel Osment and Jude Law. What have we learned in that time?

Are we obsoleting ourselves, or augmenting ourselves? Are we creating a work free utopia or are we running a race to the bottom?

A common AI theme is the ability, or need, for AI to have or at least understand human emotions. This was played out repeatedly through the character Data in Star Trek: TNG, and earlier in the film D.A.R.Y.L., which gave the same topic an '80s treatment. Japan has made great strides in building robots that mimic human expression, but obviously not human emotion. What would human emotion look like in AI? Is it something simulated just well enough to cross the uncanny valley? Or can it be truly replicated in some kind of machinery made by man?

Those are enough questions for me to drink till Monday so... I welcome your thoughts.

Alex D. from TRIBE on Utility Room


TRIBE Member
Rereading my post, I have a question for myself about the comment that "... there will always be a need for intelligence ..."

This was an assumption I held until a moment ago. In H.G. Wells' "The Time Machine" he looked toward a future where there wasn't really any intelligence. Human intelligence had somehow become ... extinct. The Morlocks preyed on the naive humans above, but they too didn't seem to exhibit any intelligence beyond harvesting humans.

So the natural question is: can we literally outsmart ourselves? Will our intellect be our own demise? Carl Sagan touched on this topic repeatedly, but perhaps he was just a product of the Cold War.

Is it inevitable that we always become smarter? The Egyptians built the pyramids somehow, and yet today nobody knows how that was accomplished. What about all the media I have on CD-ROM? I'm working from a computer now that has no CD-ROM drive; does this mean I will lose all the data I worked so hard to collect? In the movie "The Time Machine" the time traveller turns books to dust with a simple sweep of the arm. Is my collection of data destined for the same fate?