Here Comes BMW's Futuristic Motorcycle That Balances on Its Own

The motorcycle of the future is so smart that it could eliminate the need for protective gear, according to automaker BMW.
To mark its 100th birthday, BMW has unveiled a number of concept vehicles that imagine the future of transportation. Possibly its most daring revelation, the so-called Motorrad Vision Next 100 concept motorcycle is so advanced that BMW claims riders wouldn't need a helmet.
The Motorrad Vision Next 100 would have a self-balancing system that keeps the bike upright both in motion and at a standstill. BMW touted the motorbike's futuristic features, saying it would allow riders of all skill levels to "enjoy the sensation of absolute freedom." According to the automaker, the Motorrad wouldn't require protective gear such as helmets and padded suits.
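BMW hasn't published details of how that self-balancing system would work. As a rough illustration only, and not BMW's actual design, self-balancing two-wheelers are often described as a feedback loop: a sensor measures the lean angle, and a controller applies a corrective steering torque. The Python sketch below simulates that idea with a simple PID controller; the gains and the toy physics are entirely hypothetical.

```python
# Minimal, hypothetical sketch of a self-balancing feedback loop -- NOT
# BMW's implementation. It illustrates the general idea: measure the lean
# angle, then apply a corrective steering torque to stay upright.

class LeanController:
    """Simple PID controller that drives the lean angle toward upright (0 rad)."""

    def __init__(self, kp=12.0, ki=0.5, kd=3.0):
        self.kp, self.ki, self.kd = kp, ki, kd  # illustrative gains only
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, lean_angle, dt):
        """Return a corrective torque for the measured lean angle."""
        error = 0.0 - lean_angle              # target: perfectly upright
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy simulation: an inverted-pendulum-like bike nudged off balance.
angle, rate, dt = 0.05, 0.0, 0.01             # start 0.05 rad off vertical
controller = LeanController()
for _ in range(1000):                          # simulate 10 seconds
    torque = controller.update(angle, dt)
    rate += (9.81 * angle + torque) * dt       # gravity tips it; torque corrects
    angle += rate * dt
print(f"final lean angle: {angle:.4f} rad")    # settles near zero
```

A production system would add gyroscopic sensing, actuator limits and redundancy, but the measure-then-correct loop above is the core idea behind keeping a bike upright without a rider's input.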

Another traditional feature was also missing from the concept: a control panel. Instead, helmetless riders would wear a visor that acts as a smart display.
"Information is exchanged between rider and bike largely via the smart visor," BMW said in a statement. "This spans the rider's entire field of view and provides not only wind protection but also relevant information, which it projects straight into the line of sight as and when it is needed."
Such information would not be needed all the time, because riders would be able to hand over active control of the vehicle at times; the Motorrad and the other Vision Next 100 vehicles would be equipped with self-driving technology, according to BMW.

The futuristic motorcycle and the other concepts released during the centennial event were billed as zero-emissions vehicles, because BMW said it believes the future of transportation is electric.
Other concepts in the Next 100 Years series included a massive Rolls-Royce (measuring nearly 20 feet long) that is referred to as "her" because of the vehicle's AI, called Eleanor. The fully autonomous Rolls-Royce has a couch instead of seats and no steering wheel. BMW also unveiled a Mini concept that is partially transparent and designed completely around car-sharing. There would be no need to own this future Mini: BMW said the vehicle could be summoned to a location with an app, arriving autonomously and ready for use.

Tesla Cars Now Have the Hardware Necessary to Drive Themselves

Tesla announced today, in a blog post on its website, that all of its vehicles -- the Model S, the Model X, and the forthcoming Model 3 -- will have the hardware in place to allow them to be fully autonomous in the future. The vehicles will have eight cameras providing 360-degree vision at up to 250 meters (about 275 yards). They will also be equipped with 12 ultrasonic sensors that detect "both hard and soft objects" (obstructions like cars and human bodies) at twice the distance of the current Autopilot, as well as forward-facing radar that can detect traffic and events through fog, rain, dust, and even the car in front of you.
Making sense of the world from all of this information requires a huge amount of processing power, so Tesla is using a new onboard computer that's 40 times more powerful than the previous generation. This sensing and processing will come at a price: The current Autopilot costs about $3,000, company head Elon Musk said in a question-and-answer session after the announcement, but the self-driving system costs a hefty $8,000.


Tesla, though, has learned its lesson about releasing powerful new software into the wild, where drivers might not use it as intended. According to the company's blog post:
We will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.
But Musk elaborated in the Q&A, saying that it wouldn't make sense to turn off features that prevent accidents and increase safety. The company will update even the oldest Autopilot systems over the air as further testing of the self-driving system yields improvements.
So these vehicles won't be self-driving from day one, but they will be SAE Level 5 fully autonomous -- with no need for human input -- very soon. "The hardware is capable of the highest level of autonomy," Musk said. Adding this hardware now achieves one of the goals in his Tesla Master Plan Part Deux, released in July: "All Tesla vehicles will have the hardware necessary to be fully self-driving with fail-operational capability, meaning that any given system in the car could break and your car will still drive itself safely."


Are You Ready for a New Computer That Reads Your Mind?

This article was originally published at The Conversation. The publication contributed the article to Live Science's Expert Voices: Op-Ed & Insights.
The first computers cost millions of dollars and were locked inside rooms equipped with special electrical circuits and air conditioning. The only people who could use them had been trained to write programs in that specific computer's language. Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects.
This newly immersive world not only is open to more people to experience; it also allows almost anyone to exercise their own creativity and innovative tendencies. No longer are these capabilities dependent on being a math whiz or a coding expert: Mozilla's "A-Frame" is making the task of building complex virtual reality models much easier for programmers. And Google's "Tilt Brush" software allows people to build and edit 3D worlds without any programming skills at all.
My own research hopes to develop the next phase of human-computer interaction. We are monitoring people's brain activity in real time and recognizing specific thoughts (of "tree" versus "dog" or of a particular pizza topping). It will be yet another step in the historical progression that has brought technology to the masses – and will widen its use even more in the coming years.
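The article doesn't spell out how "recognizing specific thoughts" works, but research of this kind is typically framed as supervised classification: feature vectors extracted from brain recordings are labeled with the concept the person was thinking of, and a classifier learns to separate them. The Python sketch below is a hypothetical stand-in, not the author's actual pipeline; it uses synthetic "EEG feature" vectors and a standard scikit-learn classifier.

```python
# Hypothetical sketch: telling "tree" thoughts from "dog" thoughts as a
# supervised classification problem. The "EEG features" here are synthetic
# stand-ins; a real pipeline would extract features (e.g., band power per
# channel) from recorded brain activity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 32          # e.g., 8 channels x 4 frequency bands

# Fake two classes of feature vectors with slightly different means.
tree = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_features))
dog = rng.normal(loc=0.6, scale=1.0, size=(n_trials, n_features))
X = np.vstack([tree, dog])
y = np.array([0] * n_trials + [1] * n_trials)   # 0 = "tree", 1 = "dog"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The hard part in practice is not the classifier but the features: real brain signals are noisy and vary from person to person, which is why this remains an active research problem.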
From those early computers dependent on machine-specific programming languages, the first major improvement allowing more people to use computers was the development of the Fortran programming language. It expanded the range of programmers to scientists and engineers who were comfortable with mathematical expressions. This was the era of punch cards, when programs were written by punching holes in cardstock, and output had no graphics – only keyboard characters.
By the late 1960s mechanical plotters let programmers draw simple pictures by telling a computer to raise or lower a pen, and move it a certain distance horizontally or vertically on a piece of paper. The commands and graphics were simple, but even drawing a basic curve required understanding trigonometry, to specify the very small intervals of horizontal and vertical lines that would look like a curve once finished.
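To make that concrete: approximating a curve meant computing many closely spaced points with sines and cosines and connecting them with tiny straight pen moves. The short Python sketch below illustrates the idea; the pen command itself is a hypothetical stand-in, since each plotter had its own command set.

```python
# Sketch of how a 1960s plotter "drew" a curve: many tiny straight pen
# moves whose endpoints are computed with trigonometry. Printing the moves
# stands in for a hypothetical plotter pen command.
import math

def quarter_circle_moves(radius, steps):
    """Yield (dx, dy) pen moves approximating a quarter circle."""
    prev_x, prev_y = radius, 0.0
    for i in range(1, steps + 1):
        angle = (math.pi / 2) * i / steps        # sweep from 0 to 90 degrees
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        yield x - prev_x, y - prev_y             # small horizontal/vertical offsets
        prev_x, prev_y = x, y

for dx, dy in quarter_circle_moves(radius=100.0, steps=12):
    print(f"move pen by ({dx:+.2f}, {dy:+.2f})")
```

With only a dozen segments the "curve" is visibly faceted; a smooth result required hundreds of moves, all worked out by the programmer.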
The 1980s introduced what has become the familiar windows, icons and mouse interface. That gave nonprogrammers a much easier time creating images – so much so that many comic strip authors and artists stopped drawing in ink and began working with computer tablets. Animated films went digital, as programmers developed sophisticated proprietary tools for use by animators.
Simpler tools became commercially available for consumers. In the early 1990s the OpenGL library allowed programmers to build 2D and 3D digital models and add color, movement and interaction to these models.
In recent years, 3D displays have become much smaller and cheaper than the multimillion-dollar CAVE and similar immersive systems of the 1990s, which needed a space 30 feet wide, 30 feet long and 20 feet high to fit their rear-projection systems. Now smartphone holders can provide a personal 3D display for less than US$100.
User interfaces have gotten similarly more powerful. Multitouch pads and touchscreens recognize movements of multiple fingers on a surface, while devices such as the Wii and Kinect recognize movements of arms and legs. A company called Fove has been working to develop a VR headset that will track users' eyes, and which will, among other capabilities, let people make eye contact with virtual characters.
My own research is helping to move us toward what might be called "computing at the speed of thought." Low-cost open-source projects such as OpenBCI allow people to assemble their own neuroheadsets that capture brain activity noninvasively.
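As one concrete (and hedged) example: OpenBCI hardware is commonly read through the open-source BrainFlow library, which also ships a synthetic board that generates fake signals, so a sketch like the one below runs without any headset. A real OpenBCI headset would need its own board ID and connection parameters.

```python
# Minimal sketch of reading a brain-signal stream with BrainFlow, a library
# commonly used with OpenBCI boards. The synthetic board produces fake data,
# so no hardware is required.
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(2)                          # collect about 2 seconds of samples
data = board.get_board_data()          # 2-D array: channels x samples
board.stop_stream()
board.release_session()

eeg_channels = BoardShim.get_eeg_channels(board_id)
print(f"captured {data.shape[1]} samples on {len(eeg_channels)} EEG channels")
```

Everything downstream of this raw stream (filtering, feature extraction, the classification sketched earlier) is where the "recognizing thoughts" research happens.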
Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I've thought about in the past few minutes. If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought.
With more sophistication, perhaps a writer could wear an inexpensive neuroheadset and imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer's mind.
Once human thought can communicate directly with computers, a new world will open before us. One day, I would like to play games in a virtual world that incorporates social dynamics as in the experimental games "Prom Week" and "Façade" and in the commercial game "Blood & Laurels."
This type of experience would not be limited to game play. Software platforms such as an enhanced Versu could enable me to write those kinds of games, developing characters in the same virtual environments they'll inhabit.
Years ago, I envisioned an easily modifiable application that would let me keep stacks of virtual papers hovering around me, ones I could grab and rifle through to find a reference I need for a project. I would love that. I would also really enjoy playing "Quidditch" with other people while we all experience the sensation of flying via head-mounted displays and control our brooms by tilting and twisting our bodies.
Once low-cost motion capture becomes available, I envision new forms of digital story-telling. Imagine a group of friends acting out a story, then matching their bodies and their captured movements to 3D avatars to reenact the tale in a synthetic world. They could use multiple virtual cameras to "film" the action from multiple perspectives, and then construct a video.
This sort of creativity could lead to much more complex projects, all conceived in creators' minds and made into virtual experiences. Amateur historians without programming skills may one day be able to construct augmented reality systems in which they can superimpose onto views of the real world selected images from historic photos or digital models of buildings that no longer exist. Eventually they could add avatars with whom users can converse. As technology continues to progress and become easier to use, the dioramas built of cardboard, modeling clay and twigs by children 50 years ago could one day become explorable, life-sized virtual spaces.