If Gods had Human Emotions
A dream catcher for artificial intelligence’s most surreal, hazy nightmares.
The Human Future: A Case for Optimism
Change is coming. Humanity is entering a turbulent new era, unprecedented in both Earth's and human history. To survive the coming centuries and fulfill our potential as a species, we will have to overcome the biggest challenges we have ever faced, from extreme climate change to rogue A.I. to the inevitable death of the sun itself. The headlines make our chances look bleak. But when you look at our history and our tenacity, it's clear that humanity is uniquely empowered to rise to the challenges we face. If we succeed, our potential is cosmic in scale. Incredible prosperity is within our reach. Being optimistic is not only justified; it's a powerful weapon in the fight for a better future.
Computer Vision and Tesla Hardware
Over the past few years, computer vision has surged in popularity thanks to advances in machine learning. We are just now starting to scratch the surface of its potential, and as hardware and chips advance, so do the capabilities of AI and computer recognition.
This Tesla Full Self-Driving Hardware video gives a great view of one of the essential use cases of this technology and its amazing possibilities. Note the speed and distance at which it can still recognize objects as the car progresses: truly amazing.
I’m creating a computer vision documentation center filled with research and prototypes I have built over the past few years, which will hopefully educate and spark interest from others in the space. https://davidbanthony.com/experiments-list/2018/8/6/code-computer-vision
AI, Robotics, and the Future of Jobs
Key themes: reasons to be hopeful
- Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
- We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
- Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
- Ultimately, we as a society control our own destiny through the choices we make.
Key themes: reasons to be concerned
- Automation has thus far affected mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
- Certain highly skilled workers will succeed wildly in this new environment—but far more may be displaced into lower-paying service industry jobs at best, or permanent unemployment at worst.
- Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.
Read the full article at http://pewrsr.ch/1oCzpWE
Humans With Amplified Intelligence Could Be More Powerful Than AI
With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.
Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are of AI.
Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?
The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.
The first step will be to create a direct neural link to information. Think of it as a “telepathic Google.”
The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile cortex and auditory cortex.
The third step involves the genuine augmentation of the pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers: people who perform apparently impossible intellectual feats, such as mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone-age human — but the possibility is real.
For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn’t expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.
Looking ahead to the next few decades, how could IA come about? Is the human brain really that fungible?
The human brain is not really that fungible. It is the product of more than seven million years of evolutionary optimization and fine-tuning, which is to say that it’s already highly optimized given its inherent constraints. Attempts to overclock it usually cause it to break, as demonstrated by the horrific effects of amphetamine addiction.
Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current “brain-enhancing drugs” is extremely sketchy. Achieving real strides will require brain implants with connections to millions of neurons. This will require millions of tiny electrodes, and a control system to synchronize them all. The current state-of-the-art brain-computer interfaces have around 1,000 connections, so current devices need to be scaled up by more than 1,000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible — at least 15 to 20 years.
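As an aside, the 15-to-20-year figure falls out of the numbers above if you assume a Moore's-law-like doubling of electrode counts. This is my own back-of-the-envelope sketch, not Anissimov's; the 18-to-24-month doubling period is an assumption borrowed from semiconductor scaling, not a claim from the interview:

```python
import math

# Back-of-the-envelope sketch of the scaling argument above.
# Assumed: electrode counts double every 18-24 months (a Moore's-law-like
# rate); the interview only says "exponential improvement".
current_connections = 1_000      # "around 1,000 connections" today
target_connections = 1_000_000   # "millions of tiny electrodes"

# Doublings needed for a 1,000x gain: log2(1,000,000 / 1,000) ~ 9.97.
doublings = math.log2(target_connections / current_connections)

# Translate doublings into calendar years for the two assumed periods.
for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling}-month doubling: ~{years:.0f} years")
```

Roughly ten doublings at 18 months each gives about 15 years; at 24 months each, about 20 years — the same range quoted in the interview.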
Improvement in IA rests upon progress in nano-manufacturing. Brain-computer interface engineers, like Ed Boyden at MIT, depend upon improvements in manufacturing to build these devices. Manufacturing is the linchpin on which everything else depends. Given that there is very little development of atomically-precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Nanoscale self-assembly is not atomically precise, but it’s precise by the standards of bulk manufacturing and photolithography.
What potential psychological side-effects may emerge from a radically enhanced human? Would they even be considered a human at this point?
One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider “crazy.” There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we’d probably have a lot of trouble convincing these people they are insane.
Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.
Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, “An intelligent man is sometimes forced to be drunk to spend time with his fools.” What if drunkenness were not enough to instill camaraderie and mutual affection? There could be a clean “empathy break” that leads to psychopathy.
So which will come first? AI or IA?
It’s very difficult to predict either. There is a tremendous bias for wanting IA to come first, because of all the fun movies and video games with intelligence-enhanced protagonists. It’s important to recognize that this bias in favor of IA does not in fact influence the actual technological difficulty of the approach. My guess is that AI will come first because development is so much cheaper and cleaner.
Both endeavours are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, they must both come to pass — there’s nothing magical about intelligence, and the demand for its enhancement is enormous. It would require nothing less than a global totalitarian Luddite dictatorship to hold either back for the long term.
What are the advantages and disadvantages to the two different developmental approaches?
The primary advantage of the AI route is that it is immeasurably cheaper and easier to do research. AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants. These brain implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills — you really need to develop brain implants to get any significant returns.
Most research in that area is heavily regulated and expensive. All animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years — and in 2004 it passed a live tissue test, but there has been very little news since then. Every few years he pops up in the media and says it’s just around the corner, but I’m skeptical. Meanwhile, there is a lot of intriguing progress in Artificial Intelligence.
Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?
Intelligence Augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It’s actually quite dangerous, in the long term. I recently wrote an article that speculates on global political transformation caused by a large amount of power concentrated in the hands of a small group due to “miracle technologies” like IA or molecular manufacturing. I also coined the term “Maximillian,” meaning “the best,” to refer to a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.
Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, “The Nth Degree.”
The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.
AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.
People say, “Won’t it reject those motivations?” It won’t, because those motivations will make up its entire core of values — if it’s programmed properly. There will be no “ghost in the machine” to emerge and overthrow its programmed motives. Philosopher Nick Bostrom gives an excellent analysis of this in his paper “The Superintelligent Will.” The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn’t mean we can’t code selfless agents de novo.
What roadblocks, be they technological, medical, or ethical, do you see hindering development?
The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren’t even close.
Another roadblock is figuring out what exactly each neuron does, and identifying the exact positions of these neurons in individual people. Again, we’re not even close.
Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls “high throughput circuit screening” of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart’s content, but I have a feeling that idea might not go over so well with ethics committees.
Absent that, we’d need an extremely high-resolution simulation of the human brain. Contrary to the hype surrounding today’s “brain simulation” projects, such a high-resolution simulation is not likely to be developed until the 2050–2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but it’s in the right ballpark.