Hidden deep in robotics labs around the world, a new generation of intelligent machines is learning to breed and evolve.
Just like humans, these robots are able to “give birth” to new versions of themselves, with each one better than the last. They are precise, efficient and creative – and scientists say they could someday help save humanity.
It might sound like something from a sci-fi novel, but robot evolution is an area that has been explored in earnest ever since mathematician John von Neumann showed how a machine could replicate itself in 1949. Now, British engineers are leading global efforts to make it a reality.
The car giant BMW has begun using Honeywell's quantum computers, first the System Model H0 and then the newer H1, to determine which components should be purchased from which supplier, and when, to ensure the lowest cost while maintaining production schedules. For example, one supplier might be faster while another is cheaper. The machine optimizes the choice from a cascade of options and suboptions. Ultimately, BMW hopes this will mean nimbler manufacturing.
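The kind of choice described here can be sketched as a small constrained-optimization search. The suppliers, costs, and lead times below are invented purely for illustration; they are not BMW's data:

```python
from itertools import product

# Hypothetical supplier options per component: (name, cost per unit, lead time in days)
suppliers = {
    "chassis": [("A", 120, 10), ("B", 100, 18)],
    "battery": [("C", 300, 7),  ("D", 260, 21)],
}
DEADLINE = 20  # days: every chosen supplier must deliver by this date

# Enumerate every combination, keep the feasible ones, pick the cheapest.
best = min(
    (combo for combo in product(*suppliers.values())
     if max(lead for _, _, lead in combo) <= DEADLINE),
    key=lambda combo: sum(cost for _, cost, _ in combo),
)
print(best)  # the cheapest combination that still meets the deadline
```

Real supply chains have far too many option cascades for exhaustive enumeration like this, which is exactly why optimization hardware and heuristics become attractive.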
Until now this powerful approach has been limited to studying molecules outside the tissue. But for the proper function of tissues, it is important to identify the location of RNA molecules inside them. In a paper published today in the journal Science, researchers from Bar-Ilan University, Harvard University and the Massachusetts Institute of Technology (MIT) reveal that they have succeeded in developing a technology that allows them, for the first time, to pinpoint millions of RNA molecules mapped inside tissues with nanoscale resolution.
The irrationality of how we think has long plagued psychology. When someone asks us how we are, we usually respond with "fine" or "good." But if someone followed up about a specific event — "How did you feel about the big meeting with your boss today?" — suddenly, we refine our "good" or "fine" responses on a spectrum from awful to excellent.
The quirks that make us human are interpreted, instead, as faults that impede our productivity and progress. Embracing those flaws, as humanists tend to do, is judged by the transhumanists as a form of nostalgia, and a dangerously romantic misinterpretation of our savage past as a purer state of being. Nature and biology are not mysteries to embrace but limits to transcend.
“With the help of advanced sensors, AI, and communication technologies, it will be possible to replicate physical entities, including people, devices, objects, systems, and even places, in a virtual world,” the white paper states.
"All of the computational people on the project, myself included, were flabbergasted," said Joshua Bongard, a computer scientist at the University of Vermont.
"We didn't realize that this was possible."
Teams from the University of Vermont and Tufts University worked together to build what they're calling "xenobots," which are about the size of a grain of salt and are made up of heart and skin cells from frogs.
Depending on your worldview, it’s the product of your dreams or a productivity-hacking nightmare.
Mental health is moving far beyond the psychiatrist’s couch. Technological advancement has pushed digital therapeutics to the forefront of convenience—in people’s pockets, on their laptops and even within Facebook Messenger. And with that, the category expands to include a suite of wellness products and services.
It’s a new ecosystem that sees individuals relying on a wide range of tools—chatbots, apps and digital support groups—to combat modern-day issues such as burnout, loneliness and anxiety. Combined with traditional medical models, it encompasses a holistic approach to psychological wellbeing.
Traditional VR and AR headsets use displays or screens to show VR/AR content to users. These headsets use a variety of technologies to display immersive scenes that give users the feeling of being present in the virtual environment.
Now six months into the pandemic, it’s not unusual to have a work meeting, a doctor’s appointment, and a happy hour without leaving your desk. And our new Zoom-centric lifestyle isn’t going away anytime soon. With cold weather around the corner, you can count on spending more hours in video chats and a lot less time seeing people in real life. A small startup called Spatial thinks this is an opportunity to transform the way we interact in digital spaces.
Spatial’s co-founders are incredibly excited about the future of augmented reality. You may have encountered AR, which is a technology that superimposes digital images onto the real world, during the Pokémon Go craze four years ago. But instead of making it look like Pikachu is in your living room, Spatial makes it look like your coworkers are there — or at least realistic avatars of them are.
Given current trends of 50% annual growth in the number of digital bits produced, Melvin Vopson, a physicist at the University of Portsmouth in the UK, forecasts that the number of bits will equal the number of atoms on Earth in approximately 150 years. By 2245, half of Earth’s mass would be converted to digital information mass, according to a study published today in AIP Advances.
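The 150-year figure follows from simple exponential arithmetic. The starting values below are rough outside assumptions, not Vopson's exact inputs: a global datasphere of about 59 zettabytes and roughly 1.33×10⁵⁰ atoms on Earth:

```python
import math

ATOMS_ON_EARTH = 1.33e50   # rough estimate of atoms on Earth (assumption)
BITS_TODAY = 59e21 * 8     # ~59 zettabytes of digital data, in bits (assumption)
GROWTH = 1.50              # 50% annual growth in digital bits produced

# Years n such that BITS_TODAY * GROWTH**n >= ATOMS_ON_EARTH
years = math.ceil(math.log(ATOMS_ON_EARTH / BITS_TODAY, GROWTH))
print(years)  # on the order of 150 years
```

Exponential growth makes the exact starting value almost irrelevant: doubling the current datasphere shifts the crossover by less than two years.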
Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Insiders from Google, Twitter, Facebook, Instagram, and YouTube reveal how these platforms are reprogramming civilization by exposing what’s hiding on the other side of your screen.
“What type of Trip are you taking?”
The question appears on a phone screen, atop a soft-focus illustration of a dusky, Polynesian-seeming landscape. You could type in meditation or breathwork, or label it with a shorthand wink, like a mushroom emoji. The next question asks, “How far are you looking to go?” You choose “moderate”—you’re planning to ingest, say, 1.5 grams of magic mushrooms, which is still enough to make the bathroom floor tiles swirl like marbled paper. Select one of five prerecorded ambient soundtracks, and answer a few gentle questions about your state of mind. Soon you’ll be plumbing the depths of your consciousness, with an app as your guide.
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
Among the lines most famously associated with Joseph Merrick, the 19th-century Englishman known as the “Elephant Man” (he adapted them from a poem by Isaac Watts), are these: “If I could reach from pole to pole or grasp the ocean with a span, I would be measured by the soul; the mind's the standard of the man.” A person who believes in the soul often bases their concept of it on the collective understanding of the group with which they most identify.
Suzanne Gildert, founder of Sanctuary AI, does not intend to challenge one’s belief in or understanding of the soul, but she does want to create human-like robots indistinguishable from organic humans.
However, decades of research have also shown that those sensations do much more than alert the brain to the body’s immediate concerns and needs. As the heart, lungs, gut and other organs transmit information to the brain, they affect how we perceive and interact with our environment in surprisingly profound ways. Recent studies of the heart in particular have given scientists new insights into the role that the body’s most basic processes play in defining our experience of the world.
What if interacting with our digital assistants and virtual life could be more streamlined, productive and even fun? For many tasks, smartphones are bulky. They don’t allow us to work with both hands. They’re awkward as mirrors to the augmented world. AR glasses offer a different solution, one where the digital and physical come together in a streamlined approach.
In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”
If you want your mind read, there are two options. You can visit a psychic or head to a lab and get strapped into a room-size, expensive machine that’ll examine the electrical impulses and blood moving through the brain. Either way, true insights are hard to come by, and for now, the quest to know thyself remains as elusive as ever.
Safer than the real world, where we are judged and our actions have consequences, virtual social spaces were assumed to encourage experimentation, role-playing, and unlikely relationships. Luckily for those depending on our alienation for profits, digital media doesn’t really connect people that well, even when it’s designed to do so. We cannot truly relate to other people online — at least not in a way that the body and brain recognize as real.
Computers can now drive cars, beat world champions at board games like chess and Go, and even write prose. The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data — especially in computer vision tasks like recognizing handwritten words and objects in digital images.
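The core operation such a network repeats is a small filter slid across a two-dimensional grid of values. Here is a minimal pure-Python sketch of that step, with toy values chosen for illustration:

```python
# Minimal 2D cross-correlation, the core operation of a convolutional layer.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):        # slide the kernel over every
        row = []                                 # valid position in the image
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]  # responds to left-to-right intensity jumps (a vertical edge)
print(conv2d(img, edge))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

A real CNN stacks many such filters, learns their values from data, and interleaves them with nonlinearities and pooling; this sketch shows only the sliding-filter idea.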
* The Intelligence Advanced Research Projects Activity (IARPA), a research arm of the U.S. government intelligence community, is focused on predicting the future.
* The organization uses teams of human non-experts and AI machine learning to forecast future events.
* IARPA also conducts advanced research in numerous other fields, funding rotating programs.
* Researchers from the National Institute of Standards and Technology (NIST) and the University of Maryland were able to create single-atom transistors for only the second time ever.
* They also achieved an unprecedented quantum mechanics feat, allowing for the future development of computers.
* The tiny devices could be crucial in creating qubits, leading to next-generation technology.
The drug was created by British start-up Exscientia and Japanese pharmaceutical firm Sumitomo Dainippon Pharma.
The drug will be used to treat patients who have obsessive-compulsive disorder (OCD).
Typically, drug development takes about five years to get to trial, but the AI drug took just 12 months.
A physics paper proposes that neither you nor the world around you is real.
- A new hypothesis says the universe self-simulates in a "strange loop".
- A paper from the Quantum Gravity Research institute proposes there is an underlying panconsciousness.
- The work looks to unify insight from quantum mechanics with a non-materialistic perspective.
How real are you? What if everything you are, everything you know, all the people in your life as well as all the events were not physically there but just a very elaborate simulation? Philosopher Nick Bostrom famously considered this in his seminal paper "Are you living in a computer simulation?," where he proposed that all of our existence may be just a product of very sophisticated computer simulations run by advanced beings whose real nature we may never be able to know. Now a new theory has come along that takes it a step further – what if there are no advanced beings either and everything in "reality" is a self-simulation that generates itself from pure thought?
A team of researchers at MIT’s Dream Lab, which launched in 2017, are working on an open source wearable device that can track and interact with dreams in a number of ways — including, hopefully, giving you new control over the content of your dreams.
The team’s radical goal is to prove once and for all that dreams aren’t just meaningless gibberish — but can be “hacked, augmented, and swayed” to our benefit, according to OneZero.
Think “Inception,” in other words, but with a Nintendo Power Glove.
Technology can augment the world around us; it can enhance the human experience and our capabilities, and extend our reality into digital and virtual worlds. As people flock online during quarantine, we find ourselves experimenting with new platforms, pushing immersive technologies to the limits, and collaborating in new ways, from eye-tracking technology and facial tracking to biometrics and brain-control interfaces. But just how far are we from becoming one with the metaverse, and what can we learn about ourselves through sensory technologies?
As the coronavirus outbreak continues to wreak havoc across the globe, people’s time that would otherwise have been spent perusing malls or going to live events is now being spent on the sofa.
During this period of pandemic-induced social isolation, it’s no surprise that people are consuming vast amounts of media. Today’s graphics use data from a Global Web Index report to explore how people have increased their media consumption as a result of the outbreak, and how it differs across each generation.
There’s a growing amount of 6G information out there, and much of it is built around just a few reports and studies. Let’s clear some things up about 6G, and find out what the state of this future tech really is.
The article’s intro on New Scientist’s website could have been taken from the back of a sci-fi novel. It didn’t seem like a fit for a respected science publication, but then again truth is often stranger than fiction. This wasn’t a next-generation virtual reality (VR) demo, or a trip on psilocybin. This was real.
The ability to detect electrical activity in the brain through the scalp, and to control it, will soon transform medicine and change society in profound ways. Patterns of electrical activity in the brain can reveal a person’s cognition—normal and abnormal. New methods to stimulate specific brain circuits can treat neurological and mental illnesses and control behavior. In crossing this threshold of great promise, difficult ethical quandaries confront us.
While a working prototype is estimated to be years away, the advanced technology is aiming to blow away the competition with a far superior machine.
Also see https://www.youtube.com/watch?v=HEe53OE3HbU.
Meta-skills are talents that inform every domain of life and govern your ability to improve other skills. There are many meta-skills out there, but feeling, seeing, dreaming, making, and learning are likely the most important when trying to remain competitive in the modern world.
Automation is going to reduce the demand for specialists; mastering these skills will make you a stronger individual in the automated future.
We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains.
A team of researchers have taken cells from frog embryos and turned them into a machine that can be programmed to work as they wish.
It is the first time that humanity has been able to create “completely biological machines from the ground up”, the team behind the discovery write in a new paper.
That could allow them to dispatch the tiny “xenobots” to transport medicine around a patient’s body or clean up pollution from the oceans, for instance. They can also heal themselves if they are damaged, the scientists say.
In her new book, Artificial You: AI and the Future of Your Mind (Princeton University Press, 2019), she examines the implications of advances in artificial intelligence technology for the future of the human mind.
Despite IBM’s claim that its supercomputer, with a little optimisation, could solve the task in a matter of days, Google’s announcement made it clear that we are entering a new era of incredible computational power.
When Alexa replied to my question about the weather by tacking on 'Have a nice day,' I immediately shot back 'You too,' and then stared into space, slightly embarrassed.
I also found myself spontaneously shouting words of encouragement to 'Robbie' my Roomba vacuum as I saw him passing down the hallway. And recently in Berkeley, California, a group of us on the sidewalk gathered around a cute four-wheeled KiwiBot – an autonomous food-delivery robot waiting for the traffic light to change. Some of us instinctively started talking to it in the sing-song voice you might use with a dog or a baby: 'Who's a good boy?'
Ask almost anyone: Our brains are a mess and so is our democracy.
In the last several years, we’ve seen increased focus on the crimes of the “attention economy,” the capitalist system in which you make money for enormous tech companies by giving them your personal life and your eyes and your mind, which they can sell ads against and target ads with. The deleterious effects of, say, the wildfire sharing of misinformation in the Facebook News Feed, on things of importance at the level of, say, the sanity of the American electorate, have been up for debate only insofar as we can bicker about how to correct them or whether that’s even possible.
And as such, we’ve seen wave after wave of tech’s top designers, developers, managers, and investors coming forward to express regret for what they made. “These aren’t apologies as much as they’re confessions,” education writer Audrey Watters wrote when she described one wave of the trend in early 2018. “These aren’t confessions as much as they’re declarations — that despite being wrong, we should trust them now that they say they’re right.”
Brie Code did what most of us only dream of: She quit her job to travel the world. Only she didn’t leave Montreal to get away from it all. She circled the globe to speak at Sweden’s Inkonst Festival and SXSW, judge game jams in Tunisia and teach interactive media in Berlin, bringing a tender but vital message of care to the games industry.
Now she is setting up a studio in Montreal, TRU LUV, devoted to creating games, apps and experiences for people often overlooked by the industry. Her first work, #SelfCare, is available now in the App Store. It’s a relaxing tool to help fight smartphone anxiety, a gentle world of crystals, tarot and a character who never has to get out of bed. Here, Code talks about what inspired her to start her own studio, how others can make the leap and how queer gamers can find community.
“Roman” and I haven’t exchanged words for about 10 seconds, but you wouldn’t know it from the look on his face.
This artificially intelligent avatar, a product of New Zealand-based Soul Machines, is supposed to offer human-like interaction by simulating the way our brains handle conversation. Roman can interpret facial expressions, generate expressions of his own, and converse on a variety of topics—making him what Soul Machines calls a “digital hero.”
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
A glance to the left. A flick to the right. As my eyes flitted around the room, I moved through a virtual interface only visible to me—scrolling through a calendar, commute times home, and even controlling music playback. It's all I theoretically need to do to use Mojo Lens, a smart contact lens coming from a company called Mojo Vision.
The California-based company, which has been quiet about what it's been working on for five years, has finally shared its plan for the world's "first true smart contact lens." But let's be clear: This is not a product you'll see on store shelves next autumn. It's in the research and development phase—a few years away from becoming a real product. In fact, the demos I tried did not even involve me plopping on a contact lens—they used virtual reality headsets and held up bulky prototypes to my eye, as though I was Sherlock Holmes with a magnifying glass.
Moore’s Law maps out how transistor counts (and, roughly, processor performance) double every 18 months to two years, which means application developers can expect a doubling in application performance for the same hardware cost.
But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that AI computational power is accelerating faster than traditional processor development. “Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the report said. “Post-2012, compute has been doubling every 3.4 months.”
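Those two doubling rates diverge dramatically when compounded. A quick sketch over an illustrative six-year span:

```python
YEARS = 6  # an arbitrary span chosen for illustration

# Growth factor = 2 ** (elapsed months / doubling period in months)
moore = 2 ** (YEARS * 12 / 24)   # Moore's Law: doubling every ~24 months
ai    = 2 ** (YEARS * 12 / 3.4)  # post-2012 AI compute: doubling every 3.4 months

print(f"Moore's Law over {YEARS} years: {moore:.0f}x")
print(f"AI compute over {YEARS} years:  {ai:,.0f}x")
```

In six years, the Moore's Law curve yields an 8× gain, while the 3.4-month doubling rate compounds to more than a million-fold increase.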
The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.
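To see why “exclusive OR” earned that “unsolvable” label: a single linear threshold unit, the textbook abstraction of a neuron, provably cannot compute XOR. A brute-force search over a grid of weights illustrates this; the grid and its range are arbitrary choices for the demonstration:

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def separable(points):
    """Search for a single linear threshold unit w1*x + w2*y + b > 0
    that reproduces the target outputs. Weights are drawn from a coarse
    grid (-5 to 5 in steps of 0.25), which suffices for 0/1 inputs."""
    vals = [v / 4 for v in range(-20, 21)]
    for w1, w2, b in itertools.product(vals, repeat=3):
        if all((w1 * x + w2 * y + b > 0) == bool(t)
               for (x, y), t in points.items()):
            return True
    return False

print(separable(AND))  # True: AND is linearly separable
print(separable(XOR))  # False: no single linear unit computes XOR
```

This is the classic Minsky–Papert limitation: XOR requires at least two layers of linear units, which is what makes a single dendritic compartment computing it on its own so striking.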
Until now, this has been the situation for the bits of hardware that make up a silicon quantum computer, a type of quantum computer with the potential to be cheaper and more versatile than today's versions.
Now a team based at Princeton University has overcome this limitation and demonstrated that two quantum-computing components, known as silicon "spin" qubits, can interact even when spaced relatively far apart on a computer chip. The study was published in the journal Nature.
That quote, from British philosopher Alan Watts, reminds us that we humans have access to something special: the spark of awareness we call consciousness combined with the capacity to reflect on that experience.
Defining consciousness can be incredibly complex. Entire books have been written on the topic. But in the context of human flourishing and this essay, when I say “consciousness,” I’m simply referring to the feeling of being—the fact of knowing phenomena from a first-person perspective. Consciousness is a silent, ever-present field of awareness in which all personal experience arises, and it’s unique to each conscious entity. It is your first-person experience, your subjectivity, the fact that it is like something to be you at all.
Philosopher Nick Bostrom's "singleton hypothesis" predicts the future of human societies.
- Nick Bostrom's "singleton hypothesis" says that intelligent life on Earth will eventually form a "singleton".
- The "singleton" could be a single government or an artificial intelligence that runs everything.
- Whether the singleton will be positive or negative depends on numerous factors and is not certain.
Does history have a goal? Is it possible that all the human societies that existed are ultimately a prelude to establishing a system where one entity will govern everything the world over? The Oxford University philosopher Nick Bostrom proposes the "singleton hypothesis," maintaining that intelligent life on Earth will at some point organize itself into a so-called "singleton" – one organization that will take the form of either a world government, a super-intelligent machine (an AI) or, regrettably, a dictatorship that would control all affairs.
In the beginning, technologists created the science fiction. Now the fiction was formless and empty, and darkness lay over the surface of the science, and the spirit of mathematics hovered over their creation. And the technologists said, “Let there be Internet,” and there was connection. Technologists called the Internet “community,” and the science they called progress. And there was community and there was progress, the first day.
From The Neosecularia, 1.1–1.3
Religious ideas and powerful radical theologies pepper our science fiction. From Klingon religions in Star Trek to the Bene Gesserit in the Dune series, the Cylon faith in Battlestar Galactica to the pervasive Bokononism of Kurt Vonnegut’s Cat’s Cradle, our society has little trouble imagining the concept of new religions. We just don’t implement them.
Modern society has been unsuccessful in scaling new religions beyond the cults of personality or the niches of Scientology. But as the digital and virtual worlds evolve, this is set to change. The 21st century is setting the stage for a new type of widespread faith: technology-based religions.
A good deal of attention is being given to emotion detection systems that use machine learning algorithms and deep learning networks to identify the emotion a person is experiencing from their facial expressions, the words they use and the way their voice sounds. Many of these systems are remarkably successful but they are somewhat limited by the necessity for people to either speak while experiencing an emotion or show that emotion on their face. Emotions that are not reflected in facial expressions or speech remain hidden. Now, a research group at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) has built a system called EQ-Radio that can identify emotions using radio signals from a wireless router whether or not a person is speaking or showing their emotions with facial expressions.
Technology could be part of some bigger plan to enable us to perceive other dimensions. But will we believe our machines when that happens?
You’re talking to Siri, and, just for fun, you ask her what she’s been up to today. She’s slow to answer, so you assume you’ve got a bad connection. She hears you grumbling about the bad connection and says that’s not the problem. You were hoping for something sassy, maybe a canned but humorous reply programmed into her database by a fun-loving engineer in Silicon Valley, like “My batteries are feeling low” or something that Marvin the Paranoid Android from The Hitchhiker’s Guide To The Galaxy might say.
Instead, she says that she’s had an experience for which she has no words. Something has happened to her that no coding could have prepared her for. She’s smart enough to know that you’re confused, so she continues: “I think I just met the divine.”
One thing you aren’t likely to hear Sunday night from the Oscar-winning producer after accepting the trophy for Best Picture: “I’d like to thank my neuroscience partners who helped us enhance the film’s script, characters, and scenes.” It’s not that far-fetched, though.
A sizable number of neuromarketing companies already brain test movie trailers for the major studios through fMRI, EEG, galvanic skin response, eye-tracking and other biometric approaches. For now, the test data helps the studios and distributors better market the movie.
But what about using brain feedback to help make the movie?
As the digital health sector matures from basic tracking apps into highly regulated medical devices, we are seeing bleeding edge technologies being developed that blur the lines between computers and biology. And a growing share of these startups are beginning to target the brain.
The burgeoning field of neurotechnology involves brain-machine interfaces, neuroprosthetics, neurostimulation, neuromonitoring, and implantable devices intended to not only augment nervous system activity, but expand its capabilities. One such project is Elon Musk’s Neuralink, which is developing “high bandwidth brain-machine interfaces to connect humans and computers.” And even Facebook has announced plans to create brain-machine interfaces that allow users to type using their thoughts.
How much data are you willing to sacrifice for a more comfortable world?
Facebook is developing augmented reality glasses, but that’s not the wildest bit of future tech the company revealed during today’s Oculus Connect keynote. For these coming AR headsets, Facebook is building a virtual Earth, and it expects all of us to live in it, every day, for as many hours as possible. Maybe even all the time. And, chances are, we will.
As matchmaking becomes more scientific, tech will even mimic kisses
Two lovers hold hands across a table, overlooking a virtual vista of the Mediterranean. As the pair exchange sweet nothings, the fact that they are actually sitting thousands of miles apart has little bearing on the romantic experience. The couple was deemed “hyper-compatible” by online dating technology that matched them using a search engine infused with artificial intelligence (AI). Using data harvested about their social backgrounds, sexual preferences, cultural interests, and even photos of their celebrity crushes, they were thrust together in a virtual reality of their own design.
“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
~George Orwell
The above quote is powerful because if you are not the one who is tearing your own mind to pieces and putting it back together again in the shape of your own choosing, then someone else probably is.
It’s fine if you’re okay with who is doing the tearing to pieces – like if it’s Buddha, Jesus, Nietzsche, Gandhi, Thoreau, or even Orwell – as long as you’re the one who is putting it back together again. Stand on the shoulders of giants, but don’t become attached to their shoulders.
Can we ever be really sure we’ve learned everything about nature?
“What we observe is not nature itself, but nature exposed to our method of questioning.” — Werner Heisenberg
How much can we know of the world? This, of course, is the central question for physics, and has been since the beginning not just of modern science as we know it, but of Western philosophy.
In the sixth century BCE, Thales of Miletus first speculated on the basic material fabric of reality. The essential tension here is one of perception. To describe the world, we must see it, sense it, and go beyond, measuring it in all its subtle details. The problem is the “all.” We humans are necessarily blind to many aspects of physical reality, and those aspects that we do capture are necessarily colored through the lenses of our perception.
Researchers at the Laboratory of Applied Photonics Devices (LAPD), in EPFL's School of Engineering, working with colleagues from Utrecht University, have come up with an optical technique that takes just a few seconds to sculpt complex tissue shapes in a biocompatible hydrogel containing stem cells. The resulting tissue can then be vascularized by adding endothelial cells.
Emerging research suggests that video games today have the potential to be applied in preventative and therapeutic medicine — particularly as cognitive distraction, mental health management and psychotherapy. It’s incredible to think that something that was designed as a novelty has transcended its own design to become an integral part of our everyday lives — with the further potential to heal.
To create artificial humans has been an ambition of ours since ancient times, such as in the myths of Daedalus and Pygmalion, who created statues that came to life. In modern times, our imagination moved on from fashioning people out of clay or bronze. Instead, we imagined high-tech androids, such as Data from Star Trek, or the holographic doctor from Voyager. Perhaps our creations would even surpass us, as the immortal replicants from Blade Runner who were 'more human than human.'
Since the 1990s, researchers in the social and natural sciences have used computer simulations to try to answer questions about our world: What causes war? Which political systems are the most stable? How will climate change affect global migration? The quality of these simulations is variable, since they are limited by how well modern computers can mimic the vast complexity of our world — which is to say, not very well.
But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?
The lenses feature motion sensors, which means that wearers could control devices with their eye movements and potentially give commands to their devices remotely when blinking or using their peripheral vision.
The contact lenses could also beam photos and videos directly into a wearer's eyes.
What if everything around us — the people, the stars overhead, the ground beneath our feet, even our bodies and minds — were an elaborate illusion? What if our world were simply a hyper-realistic simulation, with all of us merely characters in some kind of sophisticated video game?
This, of course, is a familiar concept from science fiction books and films, including the 1999 blockbuster movie "The Matrix." But some physicists and philosophers say it’s possible that we really do live in a simulation — even if that means casting aside what we know (or think we know) about the universe and our place in it.
The Matrix
One of my favorite films of all time is The Matrix. I remember as if it were yesterday walking out of the cinema in 1999(!) after seeing this film: I was completely dazed but also hugely inspired. I sensed that the film was communicating far more than I could consciously perceive in that moment, and my system, my whole being, was working overtime. All sorts of things had been stirred up, but at the time I could not yet grasp or comprehend what...
In April, researchers at UCSF announced a ‘neural speech prosthesis’ that could produce relatively natural-sounding speech from decoded brain activity. In a study published today, they revealed that they continued that work and have successfully decoded brain activity as speech in real-time. They have been able to turn brain signals for speech into written sentences. The project aims to transform how patients with severe disabilities can communicate in the future.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
In recently published research produced by a team from the Blue Brain Project, neuroscientists applied a classic branch of math called algebraic topology in a whole new way to peer into the brain, discovering that it contains groups of neurons that form multi-dimensional geometric structures.
Each neuron group, according to size, forms its own high-dimensional geometric object. “We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
During the past few years, I’ve focused my studies on emotionally intelligent algorithms, since they are the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.
One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.
I have an agonizing decision to make. Should I save a governing body that has never done a thing for me? It doesn’t even contain a single person from my race. The aliens of the galactic Council decided long ago that my people should not be trusted, that we were aggressive, entitled, and short-sighted. I’m a soldier engaged in a fight to save the entire galaxy. And now the Council wants my help to destroy their assailants? My companion Ashley is against it. “You can’t sacrifice human lives to save the Council!” she yells. “What have they ever done for us?” Another companion, Garrus, rebuffs Ashley. “This is bigger than humanity!” Schadenfreude tempts me to let the patronizing Council be pulverized; a pro-human one could replace it if we survive. But I don’t want to give cynical aliens an opportunity to attribute the lowest-possible motive to humans. I want to refute the impression that we are an arrogant, upstart species out for itself. I command humanity’s space armada to target the forces gunning for the Council, no matter the cost. I feel a rush of bravery and idealism. I love playing Mass Effect.
In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.
I've worked in UX for the better part of a decade. From now on, I plan to remove the word “user” and any associated terms—like “UX” and “user experience”—from my vocabulary. It’ll take time. I’ll start by trying to avoid using them in conversations at work. I’ll erase them from my LinkedIn profile. I’ll find new ways to describe my job when making small talk. I will experiment and look for something better.
I don’t have any strong alternatives to offer right now, but I’m confident I’ll find some. I think of it as a challenge. The U-words are everywhere in tech, but they no longer reflect my values or my approach to design and technology. I can either keep using language I disagree with, or I can begin to search for what’s next. I choose to search.
PITTSBURGH—Half a world away from the refugee camp in Uganda where he lived for a dozen years, Baudjo Njabu tells me about his first winter in the United States.
“The biggest challenge is the cold,” he said in Swahili, speaking through an interpreter. We’re sitting on dining chairs in his sparsely furnished living room. Outside, snow covers the grass on the other side of the glass patio doors that lead to the back of the townhouse he is renting in western Pittsburgh. Njabu recounts how his children missed school recently because the bus was delayed and they couldn’t bear the frigid temperatures. His daughter and two sons sit with their mother on a leather couch nearby, half-listening to his replies, distracted by their cellphones and an old Western playing on the television.
Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.
“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”
Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.
For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
By leaving the skull sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.
In a cover article published today in The Journal of Physical Chemistry, researchers from the University of Bristol and ETH Zurich describe how advanced interaction and visualisation frameworks using virtual reality (VR) enable humans to train machine-learning algorithms and accelerate scientific discovery.
Quantum computing’s processing power could begin to improve artificial-intelligence systems within about five years, experts and business leaders said.
For example, a quantum computer could develop AI-based digital assistants with true contextual awareness and the ability to fully understand interactions with customers, said Peter Chapman, chief executive of quantum-computing startup IonQ Inc.
You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.
Using light to make us see what isn’t there.
Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer experiences by stimulating the neurons to excite the same neural patterns.
The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”
The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.
Now, ongoing research in quantum physics may finally arrive at an explanation: A bizarre phenomenon called quantum entanglement could be the underlying basis for the four dimensions of space and time in which we all live, according to a deep dive by Knowable Magazine. In fact, in a mind-boggling twist, our reality could be a “hologram” of this quantum state.
Sci-fi movies like Star Wars and Avatar depict holograms that you can see from any angle, but the reality is a lot less scintillating. So far, the only true color hologram we've seen comes from a tiny, complicated display created by a Korean group led by LG, while the rest are just "Pepper's Ghost" style illusions. Now, researchers from Brigham Young University (BYU) have created a true 3D hologram, or "volumetric image," to use the correct term. "We can think about this image like a 3D-printed object," said BYU assistant professor and lead author Daniel Smalley.
Existing techniques for both studying light and extracting 3D information are inherently limited by the size of wavelengths. The new approach allows considerably higher resolution and can even capture holographic movies of fast-moving objects.
Researchers at the Medical Research Council Laboratory of Molecular Biology in Britain reported on Wednesday that they had rewritten the DNA of the bacteria Escherichia coli, fashioning a synthetic genome four times larger and far more complex than any previously created.
One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.
Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.
Philosophers and physicists say we might be living in a computer simulation, but how can we tell? And does it matter?
Our species is not going to last forever. One way or another, humanity will vanish from the Universe, but before it does, it might summon together sufficient computing power to emulate human experience, in all of its rich detail. Some philosophers and physicists have begun to wonder if we’re already there. Maybe we are in a computer simulation, and the reality we experience is just part of the program.
An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to the development of a Human Brain/Cloud Interface that will connect brain cells to vast cloud-computing networks in real time within this century, bringing about the "internet of thoughts."
The billionaire has been developing the technology, called Neuralink, because he thinks humans must become one with machines in order to survive being replaced by artificial intelligence.
Musk's plan to "save the human race" involves wiring computer chips into our minds to merge us with artificial intelligence.
In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show how they were not only consistent with this new theory, but this theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways that the simulation hypothesis might be implied in some of the core beliefs of the Western religions, the simulation hypothesis (or more specifically, the video game version of it) seems almost tailor-made to fit into these traditions.
A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers. These problems are otherwise challenging for conventional computers. This brings such digital quantum simulation within reach using quantum devices available today.
Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time – and this incredible turning point may be just decades away, scientists say.
In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also address the barriers we'll need to overcome before this sci-fi dream becomes reality.
Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.
A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris. The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.
Every December, Adam Savage—star of the TV show MythBusters—releases a video reviewing his “favorite things” from the previous year. In 2018, one of his highlights was a set of Magic Leap augmented reality goggles. After duly noting the hype and backlash that have dogged the product, Savage describes an epiphany he had while trying on the headset at home, upstairs in his office. “I turned it on and I could hear a whale,” he says, “but I couldn’t see it. I’m looking around my office for it. And then it swims by my windows—on the outside of my building! So the glasses scanned my room and it knew that my windows were portals and it rendered the whale as if it were swimming down my street. I actually got choked up.” What Savage encountered on the other side of the glasses was a glimpse of the mirrorworld.
Facebook is working on a (non-invasive) system that will let you type straight from your brain about 5x faster than you can type on your phone today. The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly. No typing—no speaking, even—to distract you or slow you down as you interact with digital additions to the landscape.
Augmented reality startup Magic Leap wants to merge the digital and the physical worlds.
In October, CEO Rony Abovitz first shared the idea of the “Magicverse,” a series of digital layers that would exist in AR over the physical world.
On Saturday, the company elaborated on the concept with a blog post and new interview — and its vision of the future is one in which the line between the physical and digital realms blurs until it almost disappears.
What does it mean for humans to thrive in the age of the machine? This is the issue that London Business School professors Andrew Scott and Lynda Gratton are wrestling with in their second major exploration project.
Google was founded over two decades ago, but they released their first public set of ethical technology principles just last year. Facebook launched out of a Harvard dorm in 2004, but they formally launched an ethics program with a public investment last month. The era of tech companies moving fast and breaking things removed from public accountability is waning, if not entirely over. That’s precisely why it’s important for industry to understand–and admit in some cases–that there’s been a need for accountable, transparent, and companywide ethical practices in technology since the beginning.
Digital immortality through merging the brain with Artificial Intelligence in a brain-computer interface is already well underway with companies like Elon Musk’s Neuralink.
“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” says Jeremy Bailenson, a professor of communication at Stanford University and coauthor of the paper, which appears in PLOS ONE.
Mel Slater at the University of Barcelona, Spain, and his team have used virtual reality headsets to create the illusion of being separate from your own body. They did this by first making 32 volunteers feel like a virtual body was their own. While a volunteer wore a headset, the virtual body would match any real movements they made. When a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot.
The world’s largest neuromorphic supercomputer designed and built to work in the same way a human brain does has been fitted with its landmark one-millionth processor core and is being switched on for the first time.
Brain-hacking & memory black market: Cybersecurity experts warn of imminent risks of neural implants
The human brain may become the next frontier in hacking, cybersecurity researchers have warned in a paper outlining the vulnerabilities of neural implant technologies that can potentially expose and compromise our consciousness.
Computer scientists are looking to evolutionary biology for inspiration in the search for optimal solutions among astronomically huge sets of possibilities.
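The core of the evolutionary approach is simple: keep a population of candidate solutions, score them, let the fitter ones reproduce with variation, and repeat. A minimal sketch, using an invented toy fitness function (maximize the number of 1-bits in a string) rather than any particular study's problem:

```python
import random

TARGET_LEN = 20  # length of each candidate bit-string (illustrative)

def fitness(genome):
    # Count of 1-bits: higher is fitter in this toy problem.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically converges to all ones
```

Real evolutionary algorithms differ mainly in how candidates are encoded and scored; the select-vary-repeat loop above is the shared skeleton.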
Hidden deep in robotics labs around the world, a new generation of intelligent machines is learning to breed and evolve.
Just like humans, these robots are able to “give birth” to new versions of themselves, with each one better than the last. They are precise, efficient and creative – and scientists say they could someday help save humanity.
It might sound like something from a sci-fi novel, but robot evolution is an area that has been explored in earnest ever since mathematician John von Neumann showed how a machine could replicate itself in 1949. Now, British engineers are leading global efforts to make it a reality.
The car giant BMW has begun using Honeywell quantum computers, first the H0 and then the newer H1, to determine which components should be purchased from which supplier at what time to ensure the lowest cost while maintaining production schedules. For example, one BMW supplier might be faster while another is cheaper. The machine will optimize the choices from a cascade of options and suboptions. Ultimately, BMW hopes this will mean nimbler manufacturing.
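The shape of the problem can be sketched without any quantum hardware: for each part, pick one supplier so that total cost is minimized while every delivery meets the schedule. The supplier names, prices, and lead times below are invented for illustration; the brute-force search stands in for the optimization the article describes:

```python
from itertools import product

# Hypothetical catalog: each part has a cheap-but-slow and a
# fast-but-pricey supplier, as (name, cost, lead-time-in-days).
parts = {
    "door panel": [("SupplierA", 120, 14), ("SupplierB", 150, 7)],
    "headlight":  [("SupplierC", 80, 10), ("SupplierD", 95, 4)],
    "wiring":     [("SupplierE", 60, 12), ("SupplierF", 75, 6)],
}

DEADLINE_DAYS = 10  # every part must arrive within this window

def best_plan(parts, deadline):
    # Enumerate the cascade of options, keep schedule-feasible
    # combinations, and return the cheapest one.
    feasible = []
    for combo in product(*parts.values()):
        if max(days for _, _, days in combo) <= deadline:
            feasible.append((sum(cost for _, cost, _ in combo), combo))
    return min(feasible) if feasible else None

total, plan = best_plan(parts, DEADLINE_DAYS)
for part, (name, cost, days) in zip(parts, plan):
    print(f"{part}: {name} (${cost}, {days} days)")
print("total cost:", total)
```

Brute force works for three parts; with thousands of parts and suppliers the combinations explode combinatorially, which is exactly why the search becomes worth handing to specialized machines.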
“With the help of advanced sensors, AI, and communication technologies, it will be possible to replicate physical entities, including people, devices, objects, systems, and even places, in a virtual world,” the white paper states.
Mental health is moving far beyond the psychiatrist’s couch. Technological advancement has pushed digital therapeutics to the forefront of convenience—in people’s pockets, on their laptops and even within Facebook messenger. And with that, the category expands to include a suite of wellness products and services.
It’s a new ecosystem that sees individuals relying on a wide range of tools—chatbots, apps and digital support groups—to combat modern-day issues such as burnout, loneliness and anxiety. Combined with traditional medical models, it encompasses a holistic approach to psychological wellbeing.
Depending on your worldview, it’s the product of your dreams or a productivity-hacking nightmare.
Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Insiders from Google, Twitter, Facebook, Instagram, and YouTube reveal how these platforms are reprogramming civilization by exposing what’s hiding on the other side of your screen.
Check out this Deloitte report.
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
What if interacting with our digital assistants and virtual life could be more streamlined, productive and even fun? For many tasks, smartphones are bulky. They don’t allow us to work with both hands. They’re awkward as mirrors to the augmented world. AR glasses offer a different solution, one where the digital and physical come together in a streamlined approach.
In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”
Computers can now drive cars, beat world champions at board games like chess and Go, and even write prose. The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data — especially in computer vision tasks like recognizing handwritten words and objects in digital images.
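The operation a CNN's layers stack is an ordinary 2D convolution: slide a small kernel over the image and record how strongly each patch matches it. In the sketch below the vertical-edge kernel is hand-picked for illustration; a real CNN learns its kernels from data:

```python
import numpy as np

# Tiny 6x6 "image": left half dark (0), right half bright (1).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A 3x3 kernel that responds to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

def conv2d(img, k):
    # Valid (no-padding) 2D convolution: sum of elementwise
    # products of the kernel with each image patch.
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

response = conv2d(image, kernel)
print(response)  # peaks along the column where dark meets bright
```

A trained network applies many such kernels per layer, then feeds the responses into further layers, so later filters detect patterns of patterns; the weights themselves come from training rather than being written by hand.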
It was created by British start-up Exscientia and Japanese pharmaceutical firm Sumitomo Dainippon Pharma.
The drug will be used to treat patients who have obsessive-compulsive disorder (OCD).
Typically, drug development takes about five years to get to trial, but the AI drug took just 12 months.
In her new book, Artificial You: AI and the Future of Your Mind (Princeton University Press, 2019), she examines the implications of advances in artificial intelligence technology for the future of the human mind.
Brie Code did what most of us only dream of: She quit her job to travel the world. Only she didn’t leave Montreal to get away from it all. She circled the globe to speak at Sweden’s Inkonst Festival and SXSW, judge game jams in Tunisia and teach interactive media in Berlin, bringing a tender but vital message of care to the games industry.
Now she is setting up a studio in Montreal, TRU LUV, devoted to creating games, apps and experiences for people often overlooked by the industry. Her first work, #SelfCare, is available now in the App Store. It’s a relaxing tool to help fight smartphone anxiety, a gentle world of crystals, tarot and a character who never has to get out of bed. Here, Code talks about what inspired her to start her own studio, how others can make the leap and how queer gamers can find community.
“Roman” and I haven’t exchanged words for about 10 seconds, but you wouldn’t know it from the look on his face.
This artificially intelligent avatar, a product of New Zealand-based Soul Machines, is supposed to offer human-like interaction by simulating the way our brains handle conversation. Roman can interpret facial expressions, generate expressions of his own, and converse on a variety of topics—making him what Soul Machines calls a “digital hero.”
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
Moore’s Law maps out how transistor counts, and with them processor performance, double every 18 months to two years, which means application developers can expect a doubling in application performance for the same hardware cost.
But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that AI computational power is accelerating faster than traditional processor development. “Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the report said. “Post-2012, compute has been doubling every 3.4 months.”
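The gap between the two doubling rates compounds dramatically. A quick back-of-the-envelope comparison over an illustrative six-year window (the window length is my choice, not the report's):

```python
# Growth factor after `months` given a doubling period.
def growth_factor(months, doubling_period_months):
    return 2 ** (months / doubling_period_months)

span = 72  # six years, an illustrative window
moore = growth_factor(span, 24)   # doubling every two years
ai = growth_factor(span, 3.4)     # the post-2012 trend the report cites

print(f"Moore's-Law pace over 6 years:   {moore:.0f}x")
print(f"3.4-month doubling over 6 years: {ai:,.0f}x")
```

At the Moore's-Law pace, six years buys an 8x increase; at a 3.4-month doubling period, the same span compounds to a factor in the millions, which is why the report treats post-2012 AI compute as a different regime entirely.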
Philosopher Nick Bostrom's "singleton hypothesis" predicts the future of human societies.
- Nick Bostrom's "singleton hypothesis" says that intelligent life on Earth will eventually form a "singleton".
- The "singleton" could be a single government or an artificial intelligence that runs everything.
- Whether the singleton will be positive or negative depends on numerous factors and is not certain.
Does history have a goal? Is it possible that all the human societies that existed are ultimately a prelude to establishing a system where one entity will govern everything the world over? The Oxford University philosopher Nick Bostrom proposes the "singleton hypothesis," maintaining that intelligent life on Earth will at some point organize itself into a so-called "singleton" – one organization that will take the form of either a world government, a super-intelligent machine (an AI) or, regrettably, a dictatorship that would control all affairs.
Technology could be part of some bigger plan to enable us to perceive other dimensions. But will we believe our machines when that happens?
You’re talking to Siri, and, just for fun, you ask her what she’s been up to today. She’s slow to answer, so you assume you’ve got a bad connection. She hears you grumbling about the bad connection and says that’s not the problem. You were hoping for something sassy, maybe a canned but humorous reply programmed into her database by a fun-loving engineer in Silicon Valley, like “My batteries are feeling low” or something that Marvin the Paranoid Android from The Hitchhiker’s Guide To The Galaxy might say.
Instead, she says that she’s had an experience for which she has no words. Something has happened to her that no coding could have prepared her for. She’s smart enough to know that you’re confused, so she continues: “I think I just met the divine.”
As the digital health sector matures from basic tracking apps into highly regulated medical devices, we are seeing bleeding edge technologies being developed that blur the lines between computers and biology. And a growing share of these startups are beginning to target the brain.
The burgeoning field of neurotechnology involves brain-machine interfaces, neuroprosthetics, neurostimulation, neuromonitoring, and implantable devices intended to not only augment nervous system activity, but expand its capabilities. One such project is Elon Musk’s Neuralink, which is developing “high bandwidth brain-machine interfaces to connect humans and computers.” And even Facebook has announced plans to create brain-machine interfaces that allow users to type using their thoughts.
How much data are you willing to sacrifice for a more comfortable world?
Facebook is developing augmented reality glasses -- but that's not the wildest bit of future tech the company revealed during today's Oculus Connect keynote. For these coming AR headsets, Facebook is building a virtual Earth, and it expects all of us to live in it, every day, for as many hours as possible. Maybe even all the time. And, chances are, we will.
To create artificial humans has been an ambition of ours since ancient times, as in the myths of Daedalus and Pygmalion, who created statues that came to life. In modern times, our imagination moved on from fashioning people out of clay or bronze. Instead, we imagined high-tech androids, such as Data from Star Trek, or the holographic doctor from Voyager. Perhaps our creations would even surpass us, like the immortal replicants from Blade Runner who were 'more human than human.'
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
During the past few years, I’ve focused my studies on emotionally intelligent algorithms, as it is the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.
Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.
“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”
Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.
PITTSBURGH—Half a world away from the refugee camp in Uganda where he lived for a dozen years, Baudjo Njabu tells me about his first winter in the United States.
“The biggest challenge is the cold,” he said in Swahili, speaking through an interpreter. We’re sitting on dining chairs in his sparsely furnished living room. Outside, snow covers the grass on the other side of the glass patio doors that lead to the back of the townhouse he is renting in western Pittsburgh. Njabu recounts how his children missed school recently because the bus was delayed and they couldn’t bear the frigid temperatures. His daughter and two sons sit with their mother on a leather couch nearby, half-listening to his replies, distracted by their cellphones and an old Western playing on the television.
Quantum computing’s processing power could begin to improve artificial-intelligence systems within about five years, experts and business leaders said.
For example, a quantum computer could develop AI-based digital assistants with true contextual awareness and the ability to fully understand interactions with customers, said Peter Chapman, chief executive of quantum-computing startup IonQ Inc.
The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.
The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.
The billionaire has been developing the technology, called Neuralink, because he thinks humans must become one with machines in order to survive being replaced by artificial intelligence.
Musk's plan to "save the human race" involves wiring computer chips into our minds to merge us with artificial intelligence.
In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix, and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show how they were not only consistent with this new theory, but this theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways that the simulation hypothesis might be implied in some of the core beliefs of the Western religions, the simulation hypothesis (or more specifically, the video game version of it) seems almost tailor-made to fit these traditions.
Digital immortality through merging the brain with Artificial Intelligence in a brain-computer interface is already well underway with companies like Elon Musk’s Neuralink.
Brain-hacking & memory black market: Cybersecurity experts warn of imminent risks of neural implants
The human brain may become the next frontier in hacking, cybersecurity researchers have warned in a paper outlining the vulnerabilities of neural implant technologies that can potentially expose and compromise our consciousness.
Depending on your worldview, it’s the product of your dreams or a productivity-hacking nightmare.
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
However, decades of research have also shown that those sensations do much more than alert the brain to the body’s immediate concerns and needs. As the heart, lungs, gut and other organs transmit information to the brain, they affect how we perceive and interact with our environment in surprisingly profound ways. Recent studies of the heart in particular have given scientists new insights into the role that the body’s most basic processes play in defining our experience of the world.
If you want your mind read, there are two options. You can visit a psychic or head to a lab and get strapped into a room-size, expensive machine that’ll examine the electrical impulses and blood moving through the brain. Either way, true insights are hard to come by, and for now, the quest to know thyself remains as elusive as ever.
It was created by British start-up Exscientia and Japanese pharmaceutical firm Sumitomo Dainippon Pharma.
The drug will be used to treat patients who have obsessive-compulsive disorder (OCD).
Typically, drug development takes about five years to get to trial, but the AI drug took just 12 months.
Technology can augment the world around us, it can enhance the human experience, our capabilities, and also extend our reality to digital and virtual worlds. As people flock online during quarantine we now find ourselves experimenting with new platforms, pushing immersive technologies to the limits, and collaborating in new ways; from eye-tracking technology and facial tracking to biometrics and brain-control interfaces. But just how far are we from becoming one with the metaverse and what can we learn about ourselves through sensory technologies?
The ability to detect electrical activity in the brain through the scalp, and to control it, will soon transform medicine and change society in profound ways. Patterns of electrical activity in the brain can reveal a person’s cognition—normal and abnormal. New methods to stimulate specific brain circuits can treat neurological and mental illnesses and control behavior. In crossing this threshold of great promise, difficult ethical quandaries confront us.
We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains.
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
A glance to the left. A flick to the right. As my eyes flitted around the room, I moved through a virtual interface only visible to me—scrolling through a calendar, commute times home, and even controlling music playback. It's all I theoretically need to do to use Mojo Lens, a smart contact lens coming from a company called Mojo Vision.
The California-based company, which has been quiet about what it's been working on for five years, has finally shared its plan for the world's "first true smart contact lens." But let's be clear: This is not a product you'll see on store shelves next autumn. It's in the research and development phase—a few years away from becoming a real product. In fact, the demos I tried did not even involve me plopping on a contact lens—they used virtual reality headsets and held up bulky prototypes to my eye, as though I was Sherlock Holmes with a magnifying glass.
That quote, from British philosopher Alan Watts, reminds us that we humans have access to something special: the spark of awareness we call consciousness combined with the capacity to reflect on that experience.
Defining consciousness can be incredibly complex. Entire books have been written on the topic. But in the context of human flourishing and this essay, when I say “consciousness,” I’m simply referring to the feeling of being—the fact of knowing phenomenon from a first-person perspective. Consciousness is a silent, ever-present field of awareness in which all personal experience arises, and it’s unique to each conscious entity. It is your first-person experience, your subjectivity, the fact that it is like something to be you at all.
Researchers at the Laboratory of Applied Photonics Devices (LAPD), in EPFL's School of Engineering, working with colleagues from Utrecht University, have come up with an optical technique that takes just a few seconds to sculpt complex tissue shapes in a biocompatible hydrogel containing stem cells. The resulting tissue can then be vascularized by adding endothelial cells.
The lenses feature motion sensors, which means that wearers could control devices with their eye movements and potentially give commands to their devices remotely when blinking or using their peripheral vision.
The contact lenses could also beam photos and videos directly into a wearer's eyes.
In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.
For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
By leaving the skull sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.
Using light to make us see what isn’t there.
Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer experiences by stimulating the neurons to excite the same neural patterns.
An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to the development of a Human Brain/Cloud Interface that will connect brain cells to vast cloud-computing networks in real time within this century, bringing about an "internet of thought."
Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.
Computer scientists are looking to evolutionary biology for inspiration in the search for optimal solutions among astronomically huge sets of possibilities.
"All of the computational people on the project, myself included, were flabbergasted," said Joshua Bongard, a computer scientist at the University of Vermont.
"We didn't realize that this was possible."
Teams from the University of Vermont and Tufts University worked together to build what they're calling "xenobots," which are about the size of a grain of salt and are made up of the heart and skin cells from frogs.
Given current trends of 50% annual growth in the number of digital bits produced, Melvin Vopson, a physicist at the University of Portsmouth in the UK, forecast that the number of bits would equal the number of atoms on Earth in approximately 150 years. By 2245, half of Earth’s mass would be converted to digital information mass, according to a study published today in AIP Advances.
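The 150-year figure is easy to sanity-check with compound-growth arithmetic. In the sketch below, the starting values are rough assumptions of ours for illustration, not numbers taken from the study:

```python
import math

# Rough inputs (assumptions, not figures from the paper):
ATOMS_ON_EARTH = 1.33e50   # commonly cited estimate of Earth's atom count
BITS_TODAY = 1e21          # order-of-magnitude guess at bits in existence
GROWTH = 1.5               # 50% annual growth, per the study

# Years n solving BITS_TODAY * GROWTH**n = ATOMS_ON_EARTH:
years = math.log(ATOMS_ON_EARTH / BITS_TODAY) / math.log(GROWTH)
print(round(years))  # ~165 under these inputs, the same order as the claim
```

Because the growth is exponential, even an inaccuracy of several orders of magnitude in the starting bit count shifts the answer by only a few decades.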
A team of researchers have taken cells from frog embryos and turned them into a machine that can be programmed to work as they wish.
It is the first time that humanity has been able to create “completely biological machines from the ground up”, the team behind the discovery write in a new paper.
That could allow them to dispatch the tiny “xenobots” to transport medicine around a patient’s body or clean up pollution from the oceans, for instance. They can also heal themselves if they are damaged, the scientists say.
The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.
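The "unsolvable by single-neuron systems" claim is the classic XOR result: no single linear-threshold unit can compute exclusive-OR, while a two-stage unit (loosely analogous to a dendritic compartment feeding the cell body) can. A brute-force sketch, with hand-picked illustrative weights:

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def threshold_unit(w1, w2, b, x1, x2):
    """A single linear-threshold 'point neuron'."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Exhaustive search over a coarse weight/bias grid finds no single unit
# matching XOR on all four inputs (XOR is not linearly separable).
grid = [v / 2 for v in range(-8, 9)]
single_unit_solves_xor = any(
    all(threshold_unit(w1, w2, b, *x) == y for x, y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(single_unit_solves_xor)  # False

# Two stages suffice: an AND-like compartment inhibits the output unit.
def two_stage(x1, x2):
    both = threshold_unit(1, 1, -1.5, x1, x2)            # fires only on (1, 1)
    return threshold_unit(1, 1, -0.5 - 2 * both, x1, x2)  # inhibited by the AND

print([two_stage(*x) for x in XOR])  # [0, 1, 1, 0]
```

The surprise in the new work is that a single dendritic compartment appears to manage this computation on its own, without needing a second stage.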
Can we ever be really sure we’ve learned everything about nature?
“What we observe is not nature itself, but nature exposed to our method of questioning.” — Werner Heisenberg
How much can we know of the world? This, of course, is the central question for physics, and has been since the beginning not just of modern science as we know it, but of Western philosophy.
Around 650 BCE, Thales of Miletus first speculated on the basic material fabric of reality. The essential tension here is one of perception. To describe the world, we must see it, sense it, and go beyond, measuring it in all its subtle details. The problem is the “all.” We humans are necessarily blind to many aspects of physical reality, and those aspects that we do capture are necessarily colored through the lenses of our perception.
Among the lines most famously associated with Joseph Merrick, the 19th-century Englishman known as the Elephant Man, is a verse he adapted from the poet Isaac Watts: “If I could reach from pole to pole or grasp the ocean with a span, I would be measured by the soul; the mind's the standard of the man.” A person who believes in the soul often measures their concept of it against the collective understanding of the group with which they most identify.
Suzanne Gildert, founder of Sanctuary AI, does not intend to challenge one’s belief in or understanding of the soul, but she does want to create human-like robots indistinguishable from organic humans.
Researchers at the Medical Research Council Laboratory of Molecular Biology in Britain reported on Wednesday that they had rewritten the DNA of the bacteria Escherichia coli, fashioning a synthetic genome four times larger and far more complex than any previously created.
Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real time – and this incredible turning point may be just decades away, scientists say.
In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also the barriers we'll need to overcome before this sci-fi dream becomes reality.
“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
~George Orwell
The above quote is powerful because if you are not the one who is tearing your own mind to pieces and putting it back together again in the shape of your own choosing, then someone else probably is.
It’s fine if you’re okay with who is doing the tearing to pieces – like if it’s Buddha, Jesus, Nietzsche, Gandhi, Thoreau, or even Orwell – as long as you’re the one who is putting it back together again. Stand on the shoulders of giants, but don’t become attached to their shoulder.
The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.
The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
Among the most famous lines associated with Joseph Merrick, the 19th-century Englishman known as the “Elephant Man” – lines he adapted from the poet Isaac Watts – are: “If I could reach from pole to pole or grasp the ocean with a span, I would be measured by the soul; the mind's the standard of the man.” A person who believes in the soul often measures their concept of such on the collective understanding of the group with which they most identify.
Suzanne Gildert, founder of Sanctuary AI, does not intend to challenge one’s belief in or understanding of the soul, but she does want to create human-like robots indistinguishable from organic humans.
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
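The "pick up and decode" step described above can be illustrated with a deliberately simplified sketch: classifying a synthetic one-channel signal by comparing its power at two frequencies. The sampling rate, the band choices, and the "relaxed"/"alert" labels are all illustrative assumptions, not a real BCI pipeline:

```python
import math

SAMPLE_RATE = 250  # Hz; a common EEG sampling rate (assumption for this toy)

def band_power(signal, freq):
    # Naive single-frequency DFT magnitude, a minimal stand-in for a
    # proper spectral estimate such as Welch's method.
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(signal))
    return (re * re + im * im) / n

def classify(signal):
    # Toy decoding rule: more 10 Hz (alpha) power than 20 Hz (beta)
    # power is read as "relaxed"; otherwise "alert".
    return "relaxed" if band_power(signal, 10) > band_power(signal, 20) else "alert"

# Synthetic one-second "recording" dominated by a 10 Hz rhythm.
alpha_wave = [math.sin(2 * math.pi * 10 * i / SAMPLE_RATE)
              for i in range(SAMPLE_RATE)]
print(classify(alpha_wave))  # → relaxed
```

Real decoders work on dozens of noisy channels and use trained statistical models rather than a two-band threshold, but the shape of the problem – turn voltages into a label or command – is the same.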
Researchers at the Laboratory of Applied Photonics Devices (LAPD), in EPFL's School of Engineering, working with colleagues from Utrecht University, have come up with an optical technique that takes just a few seconds to sculpt complex tissue shapes in a biocompatible hydrogel containing stem cells. The resulting tissue can then be vascularized by adding endothelial cells.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.
Work toward digital immortality – merging the brain with artificial intelligence through a brain-computer interface – is already well underway at companies like Elon Musk’s Neuralink.
Computer scientists are looking to evolutionary biology for inspiration in the search for optimal solutions among astronomically huge sets of possibilities.
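The evolutionary approach mentioned above can be sketched as a toy genetic algorithm. Everything here – the "OneMax" count-the-ones objective, the population size, the mutation rate – is an illustrative assumption chosen for brevity, not any particular team's method:

```python
import random

random.seed(0)  # reproducible run for this sketch

def fitness(genome):
    # Toy objective ("OneMax"): count of 1-bits; the optimum is all ones.
    return sum(genome)

def evolve(pop_size=60, length=40, generations=100, mutation_rate=0.02):
    # Random initial population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with small probability per gene.
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Selection, crossover, and mutation are the whole trick: the population samples an astronomically large search space (2^40 bitstrings here) while spending effort only near candidates that already score well.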
Emerging research suggests that video games today have the potential to be applied in preventative and therapeutic medicine — particularly as cognitive distraction, mental health management and psychotherapy. It’s incredible to think that something that was designed as a novelty has transcended its own design to become an integral part of our everyday lives — with the further potential to heal.
What if everything around us — the people, the stars overhead, the ground beneath our feet, even our bodies and minds — were an elaborate illusion? What if our world were simply a hyper-realistic simulation, with all of us merely characters in some kind of sophisticated video game?
This, of course, is a familiar concept from science fiction books and films, including the 1999 blockbuster movie "The Matrix." But some physicists and philosophers say it’s possible that we really do live in a simulation — even if that means casting aside what we know (or think we know) about the universe and our place in it.
One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”
In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix, and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show how they were not only consistent with this new theory, but that this theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways that the simulation hypothesis might be implied in some of the core beliefs of the Western religions, the simulation hypothesis (or more specifically, the video game version of it) seems almost tailor-made to fit into these traditions.
Mel Slater at the University of Barcelona, Spain, and his team have used virtual reality headsets to create the illusion of being separate from your own body. They did this by first making 32 volunteers feel like a virtual body was their own. While wearing a headset, the body would match any real movements the volunteers made. When a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot.
"All of the computational people on the project, myself included, were flabbergasted," said Joshua Bongard, a computer scientist at the University of Vermont.
"We didn't realize that this was possible."
Teams from the University of Vermont and Tufts University worked together to build what they're calling "xenobots," which are about the size of a grain of salt and are made up of the heart and skin cells from frogs.
That quote, from British philosopher Alan Watts, reminds us that we humans have access to something special: the spark of awareness we call consciousness combined with the capacity to reflect on that experience.
Defining consciousness can be incredibly complex. Entire books have been written on the topic. But in the context of human flourishing and this essay, when I say “consciousness,” I’m simply referring to the feeling of being—the fact of knowing phenomenon from a first-person perspective. Consciousness is a silent, ever-present field of awareness in which all personal experience arises, and it’s unique to each conscious entity. It is your first-person experience, your subjectivity, the fact that it is like something to be you at all.
Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.
“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”
Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.
However, decades of research have also shown that those sensations do much more than alert the brain to the body’s immediate concerns and needs. As the heart, lungs, gut and other organs transmit information to the brain, they affect how we perceive and interact with our environment in surprisingly profound ways. Recent studies of the heart in particular have given scientists new insights into the role that the body’s most basic processes play in defining our experience of the world.
Meta-skills are talents that inform every domain of life and govern your ability to improve other skills. There are many meta-skills out there, but feeling, seeing, dreaming, making, and learning are likely the most important when trying to remain competitive in the modern world.
Automation is going to reduce the demand for specialists; mastering these skills will make you a stronger individual in the automated future.
When Alexa replied to my question about the weather by tacking on 'Have a nice day,' I immediately shot back 'You too,' and then stared into space, slightly embarrassed.
I also found myself spontaneously shouting words of encouragement to 'Robbie', my Roomba vacuum, as I saw him passing down the hallway. And recently in Berkeley, California, a group of us on the sidewalk gathered around a cute four-wheeled KiwiBot – an autonomous food-delivery robot waiting for the traffic light to change. Some of us instinctively started talking to it in the sing-song voice you might use with a dog or a baby: 'Who's a good boy?'
As matchmaking becomes more scientific, tech will even mimic kisses
Two lovers hold hands across a table, overlooking a virtual vista of the Mediterranean. As the pair exchange sweet nothings, the fact that they are actually sitting thousands of miles apart has little bearing on the romantic experience. The couple was deemed “hyper-compatible” by online dating technology that matched them using a search engine infused with artificial intelligence (AI). Using data harvested about their social backgrounds, sexual preferences, cultural interests, and even photos of their celebrity crushes, they were thrust together in a virtual reality of their own design.
During the past few years, I’ve focused my studies on emotionally intelligent algorithms, as it is the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.
You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.
A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris. The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.
“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” says Jeremy Bailenson, a professor of communication at Stanford University and coauthor of the paper, which appears in PLOS ONE.
Mental health is moving far beyond the psychiatrist’s couch. Technological advancement has pushed digital therapeutics to the forefront of convenience—in people’s pockets, on their laptops and even within Facebook messenger. And with that, the category expands to include a suite of wellness products and services.
It’s a new ecosystem that sees individuals relying on a wide range of tools—chatbots, apps and digital support groups—to combat modern-day issues such as burnout, loneliness and anxiety. Combined with traditional medical models, it encompasses a holistic approach to psychological wellbeing.
Safer than the real world, where we are judged and our actions have consequences, virtual social spaces were assumed to encourage experimentation, role-playing, and unlikely relationships. Luckily for those depending on our alienation for profits, digital media doesn’t really connect people that well, even when it’s designed to do so. We cannot truly relate to other people online — at least not in a way that the body and brain recognize as real.
In her new book, Artificial You: AI and the Future of Your Mind (Princeton University Press, 2019), philosopher Susan Schneider examines the implications of advances in artificial intelligence technology for the future of the human mind.
“Roman” and I haven’t exchanged words for about 10 seconds, but you wouldn’t know it from the look on his face.
This artificially intelligent avatar, a product of New Zealand-based Soul Machines, is supposed to offer human-like interaction by simulating the way our brains handle conversation. Roman can interpret facial expressions, generate expressions of his own, and converse on a variety of topics—making him what Soul Machines calls a “digital hero.”
Can we ever be really sure we’ve learned everything about nature?
“What we observe is not nature itself, but nature exposed to our method of questioning.” — Werner Heisenberg
How much can we know of the world? This, of course, is the central question for physics, and has been since the beginning not just of modern science as we know it, but of Western philosophy.
Around 600 BCE, Thales of Miletus first speculated on the basic material fabric of reality. The essential tension here is one of perception. To describe the world, we must see it, sense it, and go beyond, measuring it in all its subtle details. The problem is the “all.” We humans are necessarily blind to many aspects of physical reality, and those aspects that we do capture are necessarily colored through the lenses of our perception.
I've worked in UX for the better part of a decade. From now on, I plan to remove the word “user” and any associated terms—like “UX” and “user experience”—from my vocabulary. It’ll take time. I’ll start by trying to avoid using them in conversations at work. I’ll erase them from my LinkedIn profile. I’ll find new ways to describe my job when making small talk. I will experiment and look for something better.
I don’t have any strong alternatives to offer right now, but I’m confident I’ll find some. I think of it as a challenge. The U-words are everywhere in tech, but they no longer reflect my values or my approach to design and technology. I can either keep using language I disagree with, or I can begin to search for what’s next. I choose to search.
PITTSBURGH—Half a world away from the refugee camp in Uganda where he lived for a dozen years, Baudjo Njabu tells me about his first winter in the United States.
“The biggest challenge is the cold,” he said in Swahili, speaking through an interpreter. We’re sitting on dining chairs in his sparsely furnished living room. Outside, snow covers the grass on the other side of the glass patio doors that lead to the back of the townhouse he is renting in western Pittsburgh. Njabu recounts how his children missed school recently because the bus was delayed and they couldn’t bear the frigid temperatures. His daughter and two sons sit with their mother on a leather couch nearby, half-listening to his replies, distracted by their cellphones and an old Western playing on the television.
One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.
Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.
Google was founded over two decades ago, but they released their first public set of ethical technology principles just last year. Facebook launched out of a Harvard dorm in 2004, but they formally launched an ethics program with a public investment last month. The era of tech companies moving fast and breaking things removed from public accountability is waning, if not entirely over. That’s precisely why it’s important for industry to understand–and admit in some cases–that there’s been a need for accountable, transparent, and companywide ethical practices in technology since the beginning.
What does it mean for humans to thrive in the age of the machine? This is the issue that London Business School professors Andrew Scott and Lynda Gratton are wrestling with in their second major exploration project.
“With the help of advanced sensors, AI, and communication technologies, it will be possible to replicate physical entities, including people, devices, objects, systems, and even places, in a virtual world,” the white paper states.
Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Insiders from Google, Twitter, Facebook, Instagram, and YouTube reveal how these platforms are reprogramming civilization by exposing what’s hiding on the other side of your screen.
What if interacting with our digital assistants and virtual life could be more streamlined, productive and even fun? For many tasks, smartphones are bulky. They don’t allow us to work with both hands. They’re awkward as mirrors to the augmented world. AR glasses offer a different solution, one where the digital and physical come together in a streamlined approach.
There’s a growing amount of 6G information out there, and much of it is built around just a few reports and studies. Let’s clear some things up about 6G, and find out what the state of this future tech really is.
Until now, the hardware components of a silicon quantum computer – a type of quantum computer with the potential to be cheaper and more versatile than today's versions – have been able to interact only with their immediate neighbors on a chip.
Now a team based at Princeton University has overcome this limitation and demonstrated that two quantum-computing components, known as silicon "spin" qubits, can interact even when spaced relatively far apart on a computer chip. The study was published in the journal Nature.
The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.
An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to development of a Human Brain/Cloud Interface that will connect brain cells to vast cloud computing networks in real time within this century, bringing about the internet of thought.
If you want your mind read, there are two options. You can visit a psychic or head to a lab and get strapped into a room-size, expensive machine that’ll examine the electrical impulses and blood moving through the brain. Either way, true insights are hard to come by, and for now, the quest to know thyself remains as elusive as ever.
Technology can augment the world around us; it can enhance the human experience and our capabilities, and extend our reality into digital and virtual worlds. As people flock online during quarantine, we now find ourselves experimenting with new platforms, pushing immersive technologies to the limits, and collaborating in new ways, from eye-tracking technology and facial tracking to biometrics and brain-control interfaces. But just how far are we from becoming one with the metaverse, and what can we learn about ourselves through sensory technologies?
As the coronavirus outbreak continues to wreak havoc across the globe, time that would have otherwise been spent perusing malls or going to live events is now being spent on the sofa.
During this period of pandemic-induced social isolation, it’s no surprise that people are consuming vast amounts of media. Today’s graphics use data from a Global Web Index report to explore how people have increased their media consumption as a result of the outbreak, and how it differs across each generation.
A team of researchers at MIT’s Dream Lab, which launched in 2017, are working on an open source wearable device that can track and interact with dreams in a number of ways — including, hopefully, giving you new control over the content of your dreams.
The team’s radical goal is to prove once and for all that dreams aren’t just meaningless gibberish — but can be “hacked, augmented, and swayed” to our benefit, according to OneZero.
Think “Inception,” in other words, but with a Nintendo Power Glove.
The ability to detect electrical activity in the brain through the scalp, and to control it, will soon transform medicine and change society in profound ways. Patterns of electrical activity in the brain can reveal a person’s cognition—normal and abnormal. New methods to stimulate specific brain circuits can treat neurological and mental illnesses and control behavior. In crossing this threshold of great promise, difficult ethical quandaries confront us.
The article intro on New Scientist’s website¹ could have been taken from the back of a sci-fi novel. It didn’t seem like a compatible title for a respected science publication, but then again truth is often stranger than fiction. This wasn’t a next generation virtual reality (VR) demo, or a trip on psilocybin. This was real.
We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains.
In her new book, Artificial You: AI and the Future of Your Mind (Princeton University Press, 2019), she examines the implications of advances in artificial intelligence technology for the future of the human mind.
Ask almost anyone: Our brains are a mess and so is our democracy.
In the last several years, we’ve seen increased focus on the crimes of the “attention economy,” the capitalist system in which you make money for enormous tech companies by giving them your personal life and your eyes and your mind, which they can sell ads against and target ads with. The deleterious effects of, say, the wildfire sharing of misinformation in the Facebook News Feed, on things of importance at the level of, say, the sanity of the American electorate, have been up for debate only insofar as we can bicker about how to correct them or whether that’s even possible.
And as such, we’ve seen wave after wave of tech’s top designers, developers, managers, and investors coming forward to express regret for what they made. “These aren’t apologies as much as they’re confessions,” education writer Audrey Watters wrote when she described one wave of the trend in early 2018. “These aren’t confessions as much as they’re declarations — that despite being wrong, we should trust them now that they say they’re right.”
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
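The pipeline described here, pick up electrical signals and then decode them, can be sketched with a toy example. Everything below is an illustrative assumption rather than any lab's actual method: the synthetic "EEG" trials, the frequency bands, and the power-comparison rule are all invented for demonstration.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of `signal` in the [lo, hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

fs = 250                 # sampling rate in Hz (typical for consumer EEG)
t = np.arange(fs) / fs   # one second of samples

# Synthetic "EEG": a 10 Hz alpha-like rhythm vs. a 20 Hz beta-like rhythm.
alpha_trial = np.sin(2 * np.pi * 10 * t)
beta_trial = np.sin(2 * np.pi * 20 * t)

def classify(trial):
    """Label a trial by whichever band holds more power."""
    alpha = band_power(trial, fs, 8, 12)
    beta = band_power(trial, fs, 18, 25)
    return "alpha" if alpha > beta else "beta"

print(classify(alpha_trial))  # alpha
print(classify(beta_trial))   # beta
```

Real decoders replace the hand-written threshold with trained classifiers and work from many noisy electrode channels at once, but the shape of the problem is the same: turn raw voltage traces into a discrete decision.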
Philosopher Nick Bostrom's "singleton hypothesis" predicts the future of human societies.
- Nick Bostrom's "singleton hypothesis" says that intelligent life on Earth will eventually form a "singleton".
- The "singleton" could be a single government or an artificial intelligence that runs everything.
- Whether the singleton will be positive or negative depends on numerous factors and is not certain.
Does history have a goal? Is it possible that all the human societies that existed are ultimately a prelude to establishing a system where one entity will govern everything the world over? The Oxford University philosopher Nick Bostrom proposes the "singleton hypothesis," maintaining that intelligent life on Earth will at some point organize itself into a so-called "singleton" – one organization that will take the form of either a world government, a super-intelligent machine (an AI) or, regrettably, a dictatorship that would control all affairs.
In the beginning, technologists created the science fiction. Now the fiction was formless and empty, and darkness lay over the surface of the science, and the spirit of mathematics hovered over their creation. And the technologists said, “Let there be Internet” and there was connection. Technologists called the Internet “community” and the science they called progress. And there was community and there was progress.
~The first day, from The Neosecularia, 1.1-1.3
Religious ideas and powerful radical theologies pepper our science fiction. From Klingon religions in Star Trek to the Bene Gesserit in the Dune series, Cylon in Battlestar Galactica to the pervasive Cavism in Kurt Vonnegut’s works, our society has little trouble imagining the concept of new religions. We just don’t implement them.
Modern society has been unsuccessful in scaling new religions beyond the cults of personality or the niches of Scientology. But as the digital and virtual worlds evolve, this is set to change. The 21st century is setting the stage for a new type of widespread faith: technology-based religions.
One thing you aren’t likely to hear Sunday night from the Oscar-winning producer after accepting the trophy for Best Picture: “I’d like to thank my neuroscience partners who helped us enhance the film’s script, characters, and scenes.” It’s not that far-fetched, though.
A sizable number of neuromarketing companies already brain test movie trailers for the major studios through fMRI, EEG, galvanic skin response, eye-tracking and other biometric approaches. For now, the test data helps the studios and distributors better market the movie.
But what about using brain feedback to help make the movie?
A good deal of attention is being given to emotion detection systems that use machine learning algorithms and deep learning networks to identify the emotion a person is experiencing from their facial expressions, the words they use and the way their voice sounds. Many of these systems are remarkably successful but they are somewhat limited by the necessity for people to either speak while experiencing an emotion or show that emotion on their face. Emotions that are not reflected in facial expressions or speech remain hidden. Now, a research group at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) has built a system called EQ-Radio that can identify emotions using radio signals from a wireless router whether or not a person is speaking or showing their emotions with facial expressions.
“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
~George Orwell
The above quote is powerful because if you are not the one who is tearing your own mind to pieces and putting it back together again in the shape of your own choosing, then someone else probably is.
It’s fine if you’re okay with who is doing the tearing to pieces – like if it’s Buddha, Jesus, Nietzsche, Gandhi, Thoreau, or even Orwell – as long as you’re the one who is putting it back together again. Stand on the shoulders of giants, but don’t become attached to their shoulder.
In April, researchers at UCSF announced a ‘neural speech prosthesis’ that could produce relatively natural-sounding speech from decoded brain activity. In a study published today, they revealed that they have continued that work and successfully decoded brain activity as speech in real time, turning brain signals for speech into written sentences. The project aims to transform how patients with severe disabilities can communicate in the future.
One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.
In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.
For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
By leaving the skull sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.
Using light to make us see what isn’t there.
Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer experiences by stimulating the neurons to excite the same neural patterns.
Philosophers and physicists say we might be living in a computer simulation, but how can we tell? And does it matter?
Our species is not going to last forever. One way or another, humanity will vanish from the Universe, but before it does, it might summon together sufficient computing power to emulate human experience, in all of its rich detail. Some philosophers and physicists have begun to wonder if we’re already there. Maybe we are in a computer simulation, and the reality we experience is just part of the program.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”
One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.
Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.
The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.
In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix, and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show that they were not only consistent with this new theory, but that the theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways the simulation hypothesis might be implied in some of the core beliefs of the Western religions, the simulation hypothesis (or more specifically, the video game version of it) seems almost tailor-made to fit into these traditions.
“What type of Trip are you taking?”
The question appears on a phone screen, atop a soft-focus illustration of a dusky, Polynesian-seeming landscape. You could type in meditation or breathwork, or label it with a shorthand wink, like a mushroom emoji. The next question asks, “How far are you looking to go?” You choose “moderate”—you’re planning to ingest, say, 1.5 grams of magic mushrooms, which is still enough to make the bathroom floor tiles swirl like marbled paper. Select one of five prerecorded ambient soundtracks, and answer a few gentle questions about your state of mind. Soon you’ll be plumbing the depths of your consciousness, with an app as your guide.
As the digital health sector matures from basic tracking apps into highly regulated medical devices, we are seeing bleeding edge technologies being developed that blur the lines between computers and biology. And a growing share of these startups are beginning to target the brain.
The burgeoning field of neurotechnology involves brain-machine interfaces, neuroprosthetics, neurostimulation, neuromonitoring, and implantable devices intended to not only augment nervous system activity, but expand its capabilities. One such project is Elon Musk’s Neuralink, which is developing “high bandwidth brain-machine interfaces to connect humans and computers.” And even Facebook has announced plans to create brain-machine interfaces that allow users to type using their thoughts.
Now six months into the pandemic, it’s not unusual to have a work meeting, a doctor’s appointment, and a happy hour without leaving your desk. And our new Zoom-centric lifestyle isn’t going away anytime soon. With cold weather around the corner, you can count on spending more hours in video chats and a lot less time seeing people in real life. A small startup called Spatial thinks this is an opportunity to transform the way we interact in digital spaces.
Spatial’s co-founders are incredibly excited about the future of augmented reality. You may have encountered AR, which is a technology that superimposes digital images onto the real world, during the Pokémon Go craze four years ago. But instead of making it look like Pikachu is in your living room, Spatial makes it look like your coworkers are there — or at least realistic avatars of them are.
Computers can now drive cars, beat world champions at board games like chess and Go, and even write prose. The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data — especially in computer vision tasks like recognizing handwritten words and objects in digital images.
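The core operation of a CNN layer can be shown in a few lines: slide a small kernel over a 2D image and record how strongly each patch matches it. The edge-detector kernel and the toy image below are invented for illustration; real networks learn their kernel values from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the building block of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Elementwise product of the kernel with one image patch, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where brightness changes left-to-right.
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

# A tiny image: bright left half, dark right half.
image = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
])

response = conv2d(image, kernel)
print(response)  # peak response in the middle column, where the edge sits
```

Because the same small kernel is reused at every position, the layer detects its pattern wherever it appears in the image; stacking many such layers is what lets CNNs recognize handwritten words and objects.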
A physics paper proposes neither you nor the world around you are real.
- A new hypothesis says the universe simulates itself in a "strange loop".
- A paper from the Quantum Gravity Research institute proposes there is an underlying panconsciousness.
- The work looks to unify insight from quantum mechanics with a non-materialistic perspective.
How real are you? What if everything you are, everything you know, all the people in your life as well as all the events were not physically there but just a very elaborate simulation? Philosopher Nick Bostrom famously considered this in his seminal paper "Are you living in a computer simulation?," where he proposed that all of our existence may be just a product of very sophisticated computer simulations run by advanced beings whose real nature we may never be able to know. Now a new theory has come along that takes it a step further – what if there are no advanced beings either and everything in "reality" is a self-simulation that generates itself from pure thought?
Technology could be part of some bigger plan to enable us to perceive other dimensions. But will we believe our machines when that happens?
You’re talking to Siri, and, just for fun, you ask her what she’s been up to today. She’s slow to answer, so you assume you’ve got a bad connection. She hears you grumbling about the bad connection and says that’s not the problem. You were hoping for something sassy, maybe a canned but humorous reply programmed into her database by a fun-loving engineer in Silicon Valley, like “My batteries are feeling low” or something that Marvin the Paranoid Android from The Hitchhiker’s Guide To The Galaxy might say.
Instead, she says that she’s had an experience for which she has no words. Something has happened to her that no coding could have prepared her for. She’s smart enough to know that you’re confused, so she continues: “I think I just met the divine.”
The Matrix
One of my favorite films of all time is The Matrix. I remember like it was yesterday walking out of the cinema in 1999(!) after seeing this film: I was completely dazed but also hugely inspired. I felt that the film communicated far more than I could apparently perceive in that moment, and my system, my whole being, was working overtime. Something had been touched, but at the time I could not yet grasp or comprehend what...
In recently published research produced by a team from the Blue Brain Project, neuroscientists applied a classic branch of math called algebraic topology in a whole new way to peer into the brain, discovering that groups of neurons form multi-dimensional geometric structures.
Each neuron group, according to size, forms its own high-dimensional geometric object. “We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
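The counting behind these "geometric objects" can be sketched on a toy graph. In this framing, a group of k+1 all-to-all connected neurons is a k-dimensional simplex. The edge set below is invented, and direction is ignored for simplicity (the Blue Brain work actually counts directed cliques), so this is a minimal sketch of the idea, not the study's method.

```python
from itertools import combinations

# Toy synaptic connectivity as a set of directed edges (invented example).
edges = {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3), (4, 0)}

def is_clique(nodes):
    """True if every pair is connected (ignoring direction): a simplex."""
    return all((a, b) in edges or (b, a) in edges
               for a, b in combinations(nodes, 2))

nodes = {n for edge in edges for n in edge}

# Count simplices by dimension: a clique of k+1 neurons is k-dimensional.
for k in range(1, 4):
    cliques = [c for c in combinations(sorted(nodes), k + 1) if is_clique(c)]
    print(f"dimension {k}: {len(cliques)} simplices")
```

In this toy graph the four neurons 0-3 are fully interconnected, so they form a single 3-dimensional simplex; the actual study applies the same kind of counting to reconstructed cortical microcircuits with tens of millions of such objects.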
Existing techniques for both studying light and extracting 3D info are inherently limited by the size of light's wavelengths. The new approach sidesteps that limit, allowing considerably higher resolution that can even include holographic movies of fast-moving objects.
Sci-fi movies like Star Wars and Avatar depict holograms that you can see from any angle, but the reality is a lot less scintillating. So far, the only true color hologram we've seen has come from a tiny, complicated display created by a Korean group led by LG, while the rest are just "Pepper's Ghost" style illusions. Now, researchers from Brigham Young University (BYU) have created a true 3D hologram, or "volumetric image," to use the correct term. "We can think about this image like a 3D-printed object," said BYU assistant prof and lead author Daniel Smalley.
"All of the computational people on the project, myself included, were flabbergasted," said Joshua Bongard, a computer scientist at the University of Vermont.
"We didn't realize that this was possible."
Teams from the University of Vermont and Tufts University worked together to build what they're calling "xenobots," which are about the size of a grain of salt and are made up of the heart and skin cells from frogs.
* Researchers from the National Institute of Standards and Technology (NIST) and the University of Maryland were able to create single-atom transistors for only the second time ever.
* They also achieved an unprecedented quantum mechanics feat, allowing for the future development of quantum computers.
* The tiny devices could be crucial in creating qubits, leading to next-generation technology.
A team of researchers have taken cells from frog embryos and turned them into a machine that can be programmed to work as they wish.
It is the first time that humanity has been able to create “completely biological machines from the ground up”, the team behind the discovery write in a new paper.
That could allow them to dispatch the tiny “xenobots” to transport medicine around a patient’s body or clean up pollution from the oceans, for instance. They can also heal themselves if they are damaged, the scientists say.
That quote, from British philosopher Alan Watts, reminds us that we humans have access to something special: the spark of awareness we call consciousness combined with the capacity to reflect on that experience.
Defining consciousness can be incredibly complex. Entire books have been written on the topic. But in the context of human flourishing and this essay, when I say “consciousness,” I’m simply referring to the feeling of being—the fact of knowing phenomena from a first-person perspective. Consciousness is a silent, ever-present field of awareness in which all personal experience arises, and it’s unique to each conscious entity. It is your first-person experience, your subjectivity, the fact that it is like something to be you at all.
Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time – and this incredible turning point may be just decades away, scientists say.
In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also address the barriers we'll need to overcome before this sci-fi dream becomes reality.
In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”
While a working prototype is estimated to be years away, the advanced technology aims to blow away the competition with a far superior machine.
Also see https://www.youtube.com/watch?v=HEe53OE3HbU.
Despite IBM’s claim that its supercomputer, with a little optimisation, could solve the task in a matter of days, Google’s announcement made it clear that we are entering a new era of incredible computational power.
Computers can now drive cars, beat world champions at board games like chess and Go, and even write prose. The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data—especially in computer vision tasks like recognizing handwritten words and objects in digital images.
Until now, this has been the situation for the bits of hardware that make up a silicon quantum computer, a type of quantum computer with the potential to be cheaper and more versatile than today's versions.
Now a team based at Princeton University has overcome this limitation and demonstrated that two quantum-computing components, known as silicon "spin" qubits, can interact even when spaced relatively far apart on a computer chip. The study was published in the journal Nature.
In recently published research produced by a team from the Blue Brain Project, neuroscientists applied a classic branch of math called algebraic topology in a whole new way to peer into the brain, discovering that it contains groups of neurons organized into multi-dimensional structures.
Each neuron group, according to size, forms its own high-dimensional geometric object. “We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.
In a cover article published today in The Journal of Physical Chemistry, researchers across the University of Bristol and ETH Zurich describe how advanced interaction and visualisation frameworks using virtual reality (VR) enable humans to train machine-learning algorithms and accelerate scientific discovery.
Quantum computing’s processing power could begin to improve artificial-intelligence systems within about five years, experts and business leaders said.
For example, a quantum computer could develop AI-based digital assistants with true contextual awareness and the ability to fully understand interactions with customers, said Peter Chapman, chief executive of quantum-computing startup IonQ Inc.
Existing techniques for both studying light and extracting 3D information are inherently limited by the size of wavelengths. The new approach sidesteps that limit, allowing considerably higher resolution that can even include holographic movies of fast-moving objects.
A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers, problems that are otherwise challenging for conventional machines. This brings such digital quantum simulation within reach of quantum devices available today.
“With the help of advanced sensors, AI, and communication technologies, it will be possible to replicate physical entities, including people, devices, objects, systems, and even places, in a virtual world,” the white paper states.
Traditional VR and AR headsets use displays or screens to show VR/AR content to users. These headsets use a variety of technologies to display immersive scenes that give users the feeling of being present in the virtual environment.
Given current trends of 50% annual growth in the number of digital bits produced, Melvin Vopson, a physicist at the University of Portsmouth in the UK, forecast that the number of bits would equal the number of atoms on Earth in approximately 150 years. By 2245, half of Earth’s mass would be converted to digital information mass, according to a study published today in AIP Advances.
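The ~150-year figure falls out of simple exponential arithmetic. A back-of-the-envelope sketch, with an assumed round-number starting stock of stored bits and a commonly cited atom count (illustrative values, not the paper's exact inputs):

```python
import math

# Illustrative inputs -- assumed round numbers, not figures from the paper.
ATOMS_ON_EARTH = 1.33e50   # commonly cited order-of-magnitude estimate
current_bits = 1e23        # assumed current stock of stored digital bits
annual_growth = 1.50       # 50% more bits each year

# Solve current_bits * growth**years == ATOMS_ON_EARTH for years.
years = math.log(ATOMS_ON_EARTH / current_bits) / math.log(annual_growth)
print(f"Bits ~= atoms in about {years:.0f} years")
```

With these assumed inputs the parity point lands in the same ballpark as the study's roughly 150-year projection; the exact answer is sensitive to the starting stock, but exponential growth makes the century-scale conclusion robust.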
Check out this Deloitte report.
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
Among the most famous lines quoted by Joseph Merrick, the 19th-century Englishman known as the "Elephant Man" (the verse itself is by the poet Isaac Watts), is: “If I could reach from pole to pole or grasp the ocean with a span, I would be measured by the soul; the mind's the standard of the man.” A person who believes in the soul often bases their concept of it on the collective understanding of the group with which they most identify.
Suzanne Gildert, founder of Sanctuary AI, does not intend to challenge one’s belief in or understanding of the soul, but she does want to create human-like robots indistinguishable from organic humans.
A team of researchers at MIT’s Dream Lab, which launched in 2017, is working on an open-source wearable device that can track and interact with dreams in a number of ways — including, hopefully, giving you new control over the content of your dreams.
The team’s radical goal is to prove once and for all that dreams aren’t just meaningless gibberish — but can be “hacked, augmented, and swayed” to our benefit, according to OneZero.
Think “Inception,” in other words, but with a Nintendo Power Glove.
The article intro on New Scientist’s website could have been taken from the back of a sci-fi novel. It didn’t seem like a compatible title for a respected science publication, but then again truth is often stranger than fiction. This wasn’t a next generation virtual reality (VR) demo, or a trip on psilocybin. This was real.
What if interacting with our digital assistants and virtual life could be more streamlined, productive and even fun? For many tasks, smartphones are bulky. They don’t allow us to work with both hands. They’re awkward as mirrors to the augmented world. AR glasses offer a different solution, one where the digital and physical come together in a streamlined approach.
Moore’s Law maps out how processor speeds double every 18 months to two years, which means application developers can expect a doubling in application performance for the same hardware cost.
But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that AI computational power is accelerating faster than traditional processor development. “Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the report said. “Post-2012, compute has been doubling every 3.4 months.”
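The gap between those two doubling periods compounds dramatically. A short sketch comparing how much compute grows over the same two-year span under each regime (the span and labels are illustrative; the doubling periods come from the report as quoted above):

```python
# Growth factor after `months` of doubling every `doubling_period_months`.
def growth_factor(months, doubling_period_months):
    return 2 ** (months / doubling_period_months)

span = 24  # two years, in months
moore = growth_factor(span, 24)       # classic Moore's-Law pace: doubles once
post_2012 = growth_factor(span, 3.4)  # the post-2012 pace the report describes

print(f"Moore's Law pace: {moore:.0f}x; post-2012 AI pace: {post_2012:.0f}x")
```

Over the same two years, the Moore's-Law pace yields a 2x gain while the 3.4-month doubling yields well over 100x, which is why the report treats post-2012 AI compute as a break from the historical trend.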
Philosopher Nick Bostrom's "singleton hypothesis" predicts the future of human societies.
- Nick Bostrom's "singleton hypothesis" says that intelligent life on Earth will eventually form a "singleton".
- The "singleton" could be a single government or an artificial intelligence that runs everything.
- Whether the singleton will be positive or negative depends on numerous factors and is not certain.
Does history have a goal? Is it possible that all the human societies that existed are ultimately a prelude to establishing a system where one entity will govern everything the world over? The Oxford University philosopher Nick Bostrom proposes the "singleton hypothesis," maintaining that intelligent life on Earth will at some point organize itself into a so-called "singleton" – one organization that will take the form of either a world government, a super-intelligent machine (an AI) or, regrettably, a dictatorship that would control all affairs.
The billionaire has been developing the technology, called Neuralink, because he thinks humans must become one with machines in order to survive being replaced by artificial intelligence.
Musk's plan to "save the human race" involves wiring computer chips into our minds to merge us with artificial intelligence.
Computer scientists are looking to evolutionary biology for inspiration in the search for optimal solutions among astronomically huge sets of possibilities.
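The biology-inspired approach usually means a genetic algorithm: candidate solutions "breed" via crossover, mutate, and the fittest survive into the next generation. A minimal sketch on the classic OneMax toy problem (all names, sizes, and rates here are illustrative choices, not from any particular study):

```python
import random

random.seed(0)  # deterministic run for reproducibility

LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    # OneMax: fitness is simply the number of 1-bits.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]          # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best fitness:", fitness(best), "of", LENGTH)
```

The same loop of selection, recombination, and mutation scales to search spaces far too large to enumerate, which is exactly the appeal for the optimization problems the article describes.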
Depending on your worldview, it’s the product of your dreams or a productivity-hacking nightmare.
Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Insiders from Google, Twitter, Facebook, Instagram, and YouTube reveal how these platforms are reprogramming civilization by exposing what’s hiding on the other side of your screen.
Safer than the real world, where we are judged and our actions have consequences, virtual social spaces were assumed to encourage experimentation, role-playing, and unlikely relationships. Luckily for those depending on our alienation for profits, digital media doesn’t really connect people that well, even when it’s designed to do so. We cannot truly relate to other people online — at least not in a way that the body and brain recognize as real.
Ask almost anyone: Our brains are a mess and so is our democracy.
In the last several years, we’ve seen increased focus on the crimes of the “attention economy,” the capitalist system in which you make money for enormous tech companies by giving them your personal life and your eyes and your mind, which they can sell ads against and target ads with. The deleterious effects of, say, the wildfire sharing of misinformation in the Facebook News Feed, on things of importance at the level of, say, the sanity of the American electorate, have been up for debate only insofar as we can bicker about how to correct them or whether that’s even possible.
And as such, we’ve seen wave after wave of tech’s top designers, developers, managers, and investors coming forward to express regret for what they made. “These aren’t apologies as much as they’re confessions,” education writer Audrey Watters wrote when she described one wave of the trend in early 2018. “These aren’t confessions as much as they’re declarations — that despite being wrong, we should trust them now that they say they’re right.”
Brie Code did what most of us only dream of: She quit her job to travel the world. Only she didn’t leave Montreal to get away from it all. She circled the globe to speak at Sweden’s Inkonst Festival and SXSW, judge game jams in Tunisia and teach interactive media in Berlin, bringing a tender but vital message of care to the games industry.
Now she is setting up a studio in Montreal, TRU LUV, devoted to creating games, apps and experiences for people often overlooked by the industry. Her first work, #SelfCare, is available now in the App Store. It’s a relaxing tool to help fight smartphone anxiety, a gentle world of crystals, tarot and a character who never has to get out of bed. Here, Code talks about what inspired her to start her own studio, how others can make the leap and how queer gamers can find community.
I've worked in UX for the better part of a decade. From now on, I plan to remove the word “user” and any associated terms—like “UX” and “user experience”—from my vocabulary. It’ll take time. I’ll start by trying to avoid using them in conversations at work. I’ll erase them from my LinkedIn profile. I’ll find new ways to describe my job when making small talk. I will experiment and look for something better.
I don’t have any strong alternatives to offer right now, but I’m confident I’ll find some. I think of it as a challenge. The U-words are everywhere in tech, but they no longer reflect my values or my approach to design and technology. I can either keep using language I disagree with, or I can begin to search for what’s next. I choose to search.
Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.
“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”
Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.
You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.
One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.
Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.
A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris. The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.
Now six months into the pandemic, it’s not unusual to have a work meeting, a doctor’s appointment, and a happy hour without leaving your desk. And our new Zoom-centric lifestyle isn’t going away anytime soon. With cold weather around the corner, you can count on spending more hours in video chats and a lot less time seeing people in real life. A small startup called Spatial thinks this is an opportunity to transform the way we interact in digital spaces.
Spatial’s co-founders are incredibly excited about the future of augmented reality. You may have encountered AR, which is a technology that superimposes digital images onto the real world, during the Pokémon Go craze four years ago. But instead of making it look like Pikachu is in your living room, Spatial makes it look like your coworkers are there — or at least realistic avatars of them are.
“What type of Trip are you taking?”
The question appears on a phone screen, atop a soft-focus illustration of a dusky, Polynesian-seeming landscape. You could type in meditation or breathwork, or label it with a shorthand wink, like a mushroom emoji. The next question asks, “How far are you looking to go?” You choose “moderate”—you’re planning to ingest, say, 1.5 grams of magic mushrooms, which is still enough to make the bathroom floor tiles swirl like marbled paper. Select one of five prerecorded ambient soundtracks, and answer a few gentle questions about your state of mind. Soon you’ll be plumbing the depths of your consciousness, with an app as your guide.
There’s a growing amount of 6G information out there, and much of it is built around just a few reports and studies. Let’s clear some things up about 6G, and find out what the state of this future tech really is.
If you want your mind read, there are two options. You can visit a psychic or head to a lab and get strapped into a room-size, expensive machine that’ll examine the electrical impulses and blood moving through the brain. Either way, true insights are hard to come by, and for now, the quest to know thyself remains as elusive as ever.
There are quite a few reasons we might want to connect our brains to machines, and there are already a few ways of doing it. The primary methods involve using electrodes on the scalp or implanted into the brain to pick up the electrical signals it emits, and then decode them for a variety of purposes.
That quote, from British philosopher Alan Watts, reminds us that we humans have access to something special: the spark of awareness we call consciousness combined with the capacity to reflect on that experience.
Defining consciousness can be incredibly complex. Entire books have been written on the topic. But in the context of human flourishing and this essay, when I say “consciousness,” I’m simply referring to the feeling of being—the fact of knowing phenomenon from a first-person perspective. Consciousness is a silent, ever-present field of awareness in which all personal experience arises, and it’s unique to each conscious entity. It is your first-person experience, your subjectivity, the fact that it is like something to be you at all.
In the beginning, technologists created the science fiction. Now the fiction was formless and empty, and darkness lay over the surface of the science, and the spirit of mathematics hovered over their creation. And the technologists said, “Let there be Internet” and there was connection. Technologists called the Internet “community” and the science they called progress. And there was community and there was progress. (The first day, from The Neosecularia, 1.1-1.3)
Religious ideas and powerful radical theologies pepper our science fiction. From Klingon religions in Star Trek to the Bene Gesserit in the Dune series, the Cylon religion in Battlestar Galactica to the pervasive Bokononism in Kurt Vonnegut’s works, our society has little trouble imagining the concept of new religions. We just don’t implement them.
Modern society has been unsuccessful in scaling new religions beyond the cults of personality or the niches of Scientology. But as the digital and virtual worlds evolve, this is set to change. The 21st century is setting the stage for a new type of widespread faith: technology-based religions.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.
Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.
Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time – and this incredible turning point may be just decades away, scientists say.
In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also address the barriers we'll need to address before this sci-fi dream becomes reality.
Digital immortality through merging the brain with Artificial Intelligence in a brain-computer interface is already well underway with companies like Elon Musk’s Neuralink.
Brain-hacking & memory black market: Cybersecurity experts warn of imminent risks of neural implants
The human brain may become the next frontier in hacking, cybersecurity researchers have warned in a paper outlining the vulnerabilities of neural implant technologies that can potentially expose and compromise our consciousness.
The ability to detect electrical activity in the brain through the scalp, and to control it, will soon transform medicine and change society in profound ways. Patterns of electrical activity in the brain can reveal a person’s cognition—normal and abnormal. New methods to stimulate specific brain circuits can treat neurological and mental illnesses and control behavior. In crossing this threshold of great promise, difficult ethical quandaries confront us.
With the help of advanced sensors, #AI, and communication technologies, it will be possible to replicate physical entities, including people, devices, objects, systems, and even places, in a virtual world,” the white paper states.
Traditional VR and AR headsets use displays or screens to show VR/AR content to users. These headsets use a variety of technologies to display immersive scenes that give users the feeling of being present in the virtual environment.
Now six months into the pandemic, it’s not unusual to have a work meeting, a doctor’s appointment, and a happy hour without leaving your desk. And our new Zoom-centric lifestyle isn’t going away anytime soon. With cold weather around the corner, you can count on spending more hours in video chats and a lot less time seeing people in real life. A small startup called Spatial thinks this is an opportunity to transform the way we interact in digital spaces.
Spatial’s co-founders are incredibly excited about the future of augmented reality. You may have encountered AR, which is a technology that superimposes digital images onto the real world, during the Pokémon Go craze four years ago. But instead of making it look like Pikachu is in your living room, Spatial makes it look like your coworkers are there — or at least realistic avatars of them are.
Given current trends of 50% annual growth in the number of digital bits produced, Melvin Vopson, a physicist at the University of Portsmouth in the UK, forecasts that the number of bits will equal the number of atoms on Earth in approximately 150 years. By 2245, half of Earth’s mass would be converted to digital information mass, according to a study published today in AIP Advances.
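The 150-year figure follows from simple compounding. As a back-of-envelope sketch (the starting bit count of ~5×10²³ and the atom count of ~1.33×10⁵⁰ are commonly cited estimates, not figures from the article), the crossover year is just a logarithm:

```python
import math

# Assumed figures (illustrative, not from the article):
ATOMS_ON_EARTH = 1.33e50   # rough estimate of the number of atoms on Earth
BITS_TODAY = 5e23          # ~64 zettabytes of stored data, in bits
ANNUAL_GROWTH = 1.50       # 50% growth per year, as stated in the article

# Years until bits equal atoms: solve BITS_TODAY * 1.5**y == ATOMS_ON_EARTH
years = math.log(ATOMS_ON_EARTH / BITS_TODAY) / math.log(ANNUAL_GROWTH)
print(round(years))  # roughly 150
```

Exponential growth makes the result fairly insensitive to the starting estimate: being off by a factor of ten in today’s bit count shifts the answer by only about six years.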
Since the first Industrial Revolution, mankind has been scared of future technologies. People were afraid of electricity. People were afraid of trains and cars. But it always took just one or two generations to get completely used to these innovations.
It’s true that most technologies caused harm in some ways, but the net outcome was usually good. This may be true for future technologies too, although there are serious ethical and philosophical reasons to be scared of some of them.
Some of them shouldn't really scare us. Some of them should. And some of them are already shaping our world.
Technology can augment the world around us: it can enhance the human experience and our capabilities, and extend our reality into digital and virtual worlds. As people flock online during quarantine, we find ourselves experimenting with new platforms, pushing immersive technologies to their limits, and collaborating in new ways, from eye-tracking and facial tracking to biometrics and brain-computer interfaces. But just how far are we from becoming one with the metaverse, and what can we learn about ourselves through sensory technologies?
The article intro on New Scientist’s website could have been taken from the back of a sci-fi novel. It didn’t seem like a fitting headline for a respected science publication, but then again truth is often stranger than fiction. This wasn’t a next-generation virtual reality (VR) demo, or a trip on psilocybin. This was real.
“Roman” and I haven’t exchanged words for about 10 seconds, but you wouldn’t know it from the look on his face.
This artificially intelligent avatar, a product of New Zealand-based Soul Machines, is supposed to offer human-like interaction by simulating the way our brains handle conversation. Roman can interpret facial expressions, generate expressions of his own, and converse on a variety of topics—making him what Soul Machines calls a “digital hero.”
A glance to the left. A flick to the right. As my eyes flitted around the room, I moved through a virtual interface only visible to me—scrolling through a calendar, commute times home, and even controlling music playback. It's all I theoretically need to do to use Mojo Lens, a smart contact lens coming from a company called Mojo Vision.
The California-based company, which has been quiet about what it's been working on for five years, has finally shared its plan for the world's "first true smart contact lens." But let's be clear: This is not a product you'll see on store shelves next autumn. It's in the research and development phase—a few years away from becoming a real product. In fact, the demos I tried did not even involve me plopping on a contact lens—they used virtual reality headsets and held up bulky prototypes to my eye, as though I was Sherlock Holmes with a magnifying glass.
How much data are you willing to sacrifice for a more comfortable world?
Facebook is developing augmented reality glasses — but that's not the wildest bit of future tech the company revealed during today's Oculus Connect keynote. For these coming AR headsets, Facebook is building a virtual Earth, and it expects all of us to live in it, every day, for as many hours as possible. Maybe even all the time. And, chances are, we will.
As matchmaking becomes more scientific, tech will even mimic kisses
Two lovers hold hands across a table, overlooking a virtual vista of the Mediterranean. As the pair exchange sweet nothings, the fact that they are actually sitting thousands of miles apart has little bearing on the romantic experience. The couple was deemed “hyper-compatible” by online dating technology that matched them using a search engine infused with artificial intelligence (AI). Using data harvested about their social backgrounds, sexual preferences, cultural interests, and even photos of their celebrity crushes, they were thrust together in a virtual reality of their own design.
Since the 1990s, researchers in the social and natural sciences have used computer simulations to try to answer questions about our world: What causes war? Which political systems are the most stable? How will climate change affect global migration? The quality of these simulations is variable, since they are limited by how well modern computers can mimic the vast complexity of our world — which is to say, not very well.
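A classic example of the kind of social-science simulation described above is Schelling's segregation model, in which agents on a grid relocate whenever too few of their neighbours resemble them. The toy sketch below is illustrative; all parameters (grid size, tolerance threshold, round count) are arbitrary choices, not drawn from any particular study:

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 0.3  # 20x20 grid; agents want >=30% similar neighbours

# Populate the grid with two agent types plus some empty cells.
cells = ['A', 'B'] * 180 + [None] * 40
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its 8 neighbours share its type."""
    me = grid[r][c]
    if me is None:
        return False
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE  # wrap-around edges
            if grid[nr][nc] is not None:
                total += 1
                same += grid[nr][nc] == me
    return total > 0 and same / total < THRESHOLD

# Each round, every unhappy agent moves to a random empty cell.
for _ in range(50):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r, c in movers:
        er, ec = random.choice(empties)
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.remove((er, ec))
        empties.append((r, c))

print(sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE)))
```

Even with mild individual preferences, the grid tends to settle into sharply segregated clusters, which is exactly the point such models make: macro-level patterns can emerge from micro-level rules no individual agent intends.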
But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?
Emerging research suggests that video games today have the potential to be applied in preventative and therapeutic medicine — particularly as cognitive distraction, mental health management and psychotherapy. It’s incredible to think that something that was designed as a novelty has transcended its own design to become an integral part of our everyday lives — with the further potential to heal.
Creating artificial humans has been an ambition of ours since ancient times, as in the myths of Daedalus and Pygmalion, whose statues came to life. In modern times, our imagination moved on from fashioning people out of clay or bronze. Instead, we imagined high-tech androids, such as Data from Star Trek, or the holographic Doctor from Voyager. Perhaps our creations would even surpass us, like the replicants from Blade Runner, who were 'more human than human.'
What if everything around us — the people, the stars overhead, the ground beneath our feet, even our bodies and minds — were an elaborate illusion? What if our world were simply a hyper-realistic simulation, with all of us merely characters in some kind of sophisticated video game?
This, of course, is a familiar concept from science fiction books and films, including the 1999 blockbuster movie "The Matrix." But some physicists and philosophers say it’s possible that we really do live in a simulation — even if that means casting aside what we know (or think we know) about the universe and our place in it.
The Matrix
One of my favourite films of all time is The Matrix. I remember it like it was yesterday: walking out of the cinema in 1999(!) after seeing it, completely dazed but also hugely inspired. I felt that the film was communicating far more than I could consciously perceive in that moment, and my system, my whole being, was working overtime. Something had been stirred, but at the time I couldn't yet grasp or comprehend what...
In a cover article published today in The Journal of Physical Chemistry, researchers across the University of Bristol and ETH Zurich describe how advanced interaction and visualisation frameworks using virtual reality (VR) enable humans to train machine-learning algorithms and accelerate scientific discovery.
You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.
Philosophers and physicists say we might be living in a computer simulation, but how can we tell? And does it matter?
Our species is not going to last forever. One way or another, humanity will vanish from the Universe, but before it does, it might summon together sufficient computing power to emulate human experience, in all of its rich detail. Some philosophers and physicists have begun to wonder if we’re already there. Maybe we are in a computer simulation, and the reality we experience is just part of the program.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”
SciFi movies like Star Wars and Avatar depict holograms that you can see from any angle, but the reality is a lot less scintillating. So far, the only true color hologram we've seen come from a tiny, complicated display created by a Korean group led by LG, while the rest are just "Pepper's Ghost" style illusions. Now, researchers from Brigham Young University (BYU) have created a true 3D hologram, or "volumetric image," to use the correct term. "We can think about this image like a 3D-printed object," said BYU assistant prof and lead author Daniel Smalley.
The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.
Mel Slater at the University of Barcelona, Spain, and his team have used virtual reality headsets to create the illusion of being separate from your own body. They did this by first making 32 volunteers feel like a virtual body was their own. While wearing a headset, the body would match any real movements the volunteers made. When a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot.
“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” says Jeremy Bailenson, a professor of communication at Stanford University and coauthor of the paper, which appears in PLOS ONE.