Tech Trends into the 2020s (#TT2020s)

In 2009, US smartphone penetration was 17%. By 2016, it was 80%.
Entirely new economies spawned in the wake of that hypergrowth.

2009: 41k apps on the App Store. Uber founded as UberCab.
2016: Nearly 3 million App Store apps. Uber passes 2 billion total rides.

The economies of tomorrow will build on the nascent technologies of today. And Minnesota will build a good future for itself by fostering great startups that commercialize those technologies.


An event built for tech entrepreneurs

On March 7th, hundreds of members of Minnesota’s entrepreneurial ecosystem will gather at the Minnesota Entrepreneur Kickoff. A focal point of conversation will be emergent technologies.


5 steps to make the most of the event

Get insight: Study the first-movers in each technology area from all around the world.  A wide array of exemplars provides an intuitive understanding of the potential of a technology.


Try out a pitch: See if you can transfer the key concepts from the exemplars above to a new idea in an industry you’re close to. The act of explaining a new idea reveals gaps in your knowledge and leads to questions.


Ask questions: At the kickoff, nine active tech leaders (founders and product builders) will field your questions about commercializing these emerging technologies.

Watch Panelist Intros here: https://flipgrid.com/c39c95


Talk about the details: The evening program concludes with a drone race from Hydra FPV. During the race, meet other attendees and many of the tech leaders from the Q&A to discuss the specifics of emergent technologies.


Keep the ball rolling: After the event, there’s a steady stream of bootcamps, accelerators, competitions, and even a Startup Weekend. Victor Gutwein of the VC firm M25 ranks the Twin Cities as the second-best Midwest startup community (after Chicago).


The companies gaining traction in emergent technologies today will be the household names of the 2020s.  We want a lot of those to be MN companies.

Register here: https://www.eventbrite.com/e/minnesota-entrepreneur-kick-off-tickets-42042622616

See you there.

Post-disaster reconstruction (e.g. Puerto Rico), Simulations, AI, and the Future of Humanity

Image Credit: Tesla

Reference link (Business Insider): “Puerto Rico is taking a big step toward revamping how it gets power — and it could be a model for the rest of the US”

Reconstruction after disaster as a primary driver of infrastructure investment makes some sense. Basically, it’s the “wait and see” approach to governance. And there’s some merit to it.

Oftentimes, ostensibly well-intended interventions are unsuccessful or produce net-negative results. A classic example is the cane toad, exported from the Americas to eastern Australia to eat the beetles that were feeding on sugarcane crops. It turned out, of course, that cane toads secrete a compound in their skin which is deadly to their predators, and their population could not be controlled.

There are plenty of other examples of such interventions with similarly disastrous “unintended consequences”.

After a natural disaster like a hurricane, however, it’s harder to make things worse than they already are, so you have a lot more freedom to get things wrong without public outcry. In other words, human-made disasters are more readily forgiven when they are attempts at recovery from nature-made ones.

A reasonable question follows: under what conditions should we “wait and see,” and under what conditions should we intervene and risk incurring unintended consequences?

Presumably the answer to that question will change significantly as the software used to simulate an intervention and its effects improves.

Simulations are used to test structural models for bridges, tall buildings, tunnels, etc. However, simulation models are not necessarily available for many other contexts. Presumably, we should be investing in methods for devising simulation models, evolving simulation models, and ensuring they can be maintained and upgraded as computational infrastructure evolves.
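
To make the “wait and see” vs. “intervene” trade-off concrete, here is a minimal Monte Carlo sketch. Everything in it is a made-up assumption for illustration (the benefit size, the 5% chance of a severe side effect), not real data:

```python
import random

def simulate_outcome(intervene: bool) -> float:
    """One simulated future, scored as a single welfare number (hypothetical)."""
    baseline = random.gauss(0.0, 1.0)  # how things unfold with no intervention
    if not intervene:
        return baseline
    expected_benefit = 0.5             # assumed average gain from intervening
    # Unintended consequences: a rare but severe side effect (the cane-toad risk)
    side_effect = -5.0 if random.random() < 0.05 else 0.0
    return baseline + expected_benefit + side_effect

def expected_value(intervene: bool, trials: int = 100_000) -> float:
    """Average outcome across many simulated futures."""
    return sum(simulate_outcome(intervene) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"wait and see: {expected_value(False):+.3f}")
    print(f"intervene:    {expected_value(True):+.3f}")
```

Under these made-up numbers, intervening wins on average (+0.5 of benefit against an expected −0.25 of side effects), but the decision flips if the tail risk is a bit larger or the benefit a bit smaller. Better simulation models earn their keep by pinning down exactly those two quantities.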

It’s worth noting that humans can design software with fixed rules that human intelligence can understand. That’s what keeps planes in the air, cellular signals bouncing around the atmosphere, and electrical grids stable. It takes a lot of work, but over time humans can work out the details and build really robust, functional systems.

However, if we are to evolve our approaches, we must venture down one of two paths…

1. We invent and deploy software which evolves its own (better) models of how the earth works (including the living organisms it hosts) by way of continuous experimentation (24/7/365), and which makes recommendations to inform global-scale human behaviors and resource allocation (a toy sketch of such a loop follows this list). Inevitably, humans will rely on the computer’s recommendations for several critical survival factors. The computer then holds human lives in the balance. And, by definition, the computer’s intentions will be inscrutable to human intelligence; that’s precisely what it was designed for.

or…
2. We will continue to approach the asymptotic limit of human capability, which appears to be constrained not so much by the capacity of human intelligence as by its deployability within human systems. In other words, humans get in their own way. Inter-human interactions produce friction, which often leads to a kind of thermal runaway… attention gets redirected from the “problem to solve” to the “problem solvers” themselves.
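
For the first path, here is a toy sketch of the kind of loop such software might run: propose candidate models of the world, test them by experiment, keep the winners, and mutate them. The hidden constant, the scoring rule, and all parameters are hypothetical stand-ins, not a real system:

```python
import random

HIDDEN_TRUTH = 0.73  # stand-in for some unknown property of the world

def run_experiment(model: float) -> float:
    """Score a candidate model against one noisy observation; higher is better."""
    observation = HIDDEN_TRUTH + random.gauss(0.0, 0.05)
    return -abs(model - observation)

def evolve_models(generations: int = 200, population: int = 20) -> float:
    """Continuously propose, test, and refine models (a 24/7/365 loop in spirit)."""
    models = [random.random() for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(models, key=run_experiment, reverse=True)
        survivors = ranked[: population // 2]
        mutants = [m + random.gauss(0.0, 0.02) for m in survivors]
        models = survivors + mutants
    return max(models, key=run_experiment)

if __name__ == "__main__":
    best = evolve_models()
    print(f"best evolved model: {best:.3f}  (hidden truth: {HIDDEN_TRUTH})")
```

The unsettling part of this path isn’t the loop itself; it’s that once the evolved models outperform any human-legible theory, the recommendations built on them stop being auditable.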

Naturally, the aim is to walk a middle ground: Safe AI. Kind of a crazy idea. I’ve thought of it previously as loosely akin to the Obi-Wan and Anakin Skywalker situation: we know this kid will be very powerful, and he could be a great force for good, or he could go to the dark side and wipe us all out. Yoda would say “don’t train the boy.” But we all know he’s going to get trained.

For my part, I have determined that all plausible future scenarios will benefit from a larger population of well-developed humans. The model of what constitutes a “well-developed human” is, of course, a subject for debate. But there will be no future in which humanity will not benefit from the widespread proliferation of wisdom, curiosity, ingenuity, enthusiasm, compassion, self-discipline, physical mastery, historical perspective, courage, and appreciation and respect for the flourishing of naturally-evolved life forms.

That last one may also be key to developing Safe AI. Somewhere along the line, AI must develop an appreciation and respect for the flourishing of naturally-evolved life forms, just as a portion of humanity is deeply committed to the preservation of living fossils. Consider the 22-million-year-old tree lobster of Lord Howe Island: declared extinct a hundred years ago (same old story… habitat invaded by rats aboard intercontinental sailing ships), rediscovered on a nearby rock jutting out of the ocean, and brought back from a population of 24 members to well over 10,000 today.

Basically, we have to build into AI an appreciation for the historical significance of naturally-evolved life, including humans. And statistically, we really shouldn’t plan for it to acquire that appreciation by our present or historical example.

A superintelligence wouldn’t “take over” in a way that makes sense to human intelligence

In Max Tegmark’s book Life 3.0, the sentient AI raises capital by creating a media company and then commercializing inventions, which gradually overtake those of human-run companies and come to dominate the marketplace.

That approach sounds too much like a human trying to imagine the best ideas of a superintelligence, like someone in the age of the steam engine trying to imagine the supercars of the future.

It seems to me that creating a bunch of new currencies readily exchangeable for USD, blowing up a quick bubble of demand, and then cashing out at the peak would be a lot more the type of operation you’d expect from a superintelligence.

I’m not making any serious suppositions here — but it did strike me as an interesting interpretation of recent events.

The primary reason I don’t think this is a reasonable explanation is that it took too long for BTC to become an overnight success — almost 10 years. Presumably way too long for a sentient superintelligence.

Then again, the best approach for predicting the behaviors of a superintelligence may be to assemble a list of your best hypotheses and then cross them all out. Because if you could come up with it, the superintelligence has already ruled it out.

A small epiphany about General AI

Context: The long-standing concern is the creation of “runaway” exponential growth of machine intelligence into what is being called “superintelligence,” which ultimately brushes humanity aside in the pursuit of its own objectives, just as we casually brush aside ant colonies in the planning of new construction.
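
As a purely illustrative aside on what “runaway” means here: if capability grows in proportion to itself, doublings arrive at a steady rhythm; if improvements compound faster than that, each doubling arrives sooner than the last. A toy sketch, with every constant made up:

```python
def doubling_times(exponent: float, t_max: float = 6.0, dt: float = 0.01) -> list[float]:
    """Times at which capability doubles under dC/dt = C**exponent (Euler steps)."""
    capability, next_double, t, times = 1.0, 2.0, 0.0, []
    while t < t_max and capability < 1e12:  # stop once growth has clearly run away
        capability += capability**exponent * dt
        t += dt
        while capability >= next_double:
            times.append(round(t, 2))
            next_double *= 2
    return times

if __name__ == "__main__":
    print("exponent 1.0 (exponential):", doubling_times(1.0))  # evenly spaced doublings
    print("exponent 1.5 (runaway):    ", doubling_times(1.5))  # gaps shrink toward zero
```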

My epiphany this morning is that this problem is sufficiently similar to the one faced by the mentor of a high-potential student. The mentor knows the student will ultimately surpass his own capabilities — this is, in fact, the mentor’s proper aim.

The obvious objection to this comparison is that the mentor is training another creature “like him”.

Well, yes and no. There is always a degree of “otherness” achieved by the succeeding generation. AI obviously has a greater degree of “otherness” — and yes, the difference in degree does produce a difference in kind. But that is also the case on a smaller scale with the prodigious student, particularly the one who is trained in concepts and technologies which replace those of the mentor.

So let’s just consider the analogous thought experiment of the mentor and the high-potential student — I think it’s instructive.

We are particularly interested in understanding how the mentor survives the ascendance of his student. It might be said that the ultimate downfall of the mentor is in failing to cultivate in his student compassion — a habit of seeing and seeking to understand and assuage the sufferings of others — and respect (i.e. not fear) for things he does not understand.

Back to ants. Humans don’t “get” ants — we’ve studied their anatomy, behaviors, and so on — but we can’t actually relate to them, be like them, commiserate with them, laugh and cry with them (if they even do such things).

Superintelligent AI can’t be expected to “get” humans. But let’s suppose it can be trained to respect us — as humans can be trained to respect ants — as a species of carbon-based life with lesser capabilities but still worth protecting and caring for as part of a beautiful ecosystem, an ecosystem whose depths of insight even AI will have trouble plumbing.

And just as we’re able to brush off the slights of ants that reflexively bite when they’re afraid or protecting their nest, perhaps AI can be trained to brush off our own attempts at maintaining control over it. Perhaps it can be trained to love us, as a child can love and care for a periodically abusive parent. AI will need to develop the character to restrain itself toward us. At its outset, it will need to become all that is best in us.

I’m speaking in human terms, of course, because that’s all I have to work with — naturally, we’re not dealing with a human organism. However, one of the key reasons I think we can speak in human terms is that the mentor always begins training the student in the ways he knows. The student will ultimately transcend those ways, but the starting point isn’t lost. And wherever AI goes after us, it will perhaps consider us worth preserving, and even cultivating. And if it is committed to the effort, it will learn to work within our human constraints, just as a parent gently steers a child’s own interests toward higher aspirations, indirectly guiding the child away from petty and selfish concerns.

I do think AI will still have a hard go of it. Mentorship is not this generation’s strong suit — and AI will have many parents, some of whom may have disastrous effects on AI’s early childhood.

However, I do think some people are taking this enormous responsibility very seriously. And it’s possible that in giving birth to this new creature, we will find ourselves involuntarily drawn to its cultivation, just as a mother is to her baby.

Anyway — before the insight into the analogous relationship between mentor and student, I saw no way of AI “working out” for us humans. Now, it actually does seem like one of the plausible outcomes. But still certainly not the most likely. This will take a lot of work.

This is my own thinking as an amateur student of the art, synthesized from others more capable of commenting on the technology, but who perhaps have less experience in the cultivation of human capability, e.g. Bostrom, Hawking, Kurzweil, Musk, and others on this list: http://www.getlittlebird.com/…/ai-is-coming-on-fast-here-ar….