Here Comes BMW's Futuristic Motorcycle That Balances on Its Own

The motorcycle of the future is so smart that it could eliminate the need for protective gear, according to automaker BMW.
To mark its 100th birthday, BMW has unveiled a number of concept vehicles that imagine the future of transportation. Possibly its most daring revelation, the so-called Motorrad Vision Next 100 concept motorcycle is so advanced that BMW claims riders wouldn't need a helmet.
The Motorrad Vision Next 100 would have a self-balancing system that keeps the bike upright both in motion and at a standstill. BMW touted the motorbike's futuristic features, saying it would allow riders of all skill levels to "enjoy the sensation of absolute freedom." According to the automaker, the Motorrad wouldn't require protective gear such as helmets and padded suits.

Another traditional feature was also missing from the concept: a control panel. Instead, helmetless riders would wear a visor that acts as a smart display.
"Information is exchanged between rider and bike largely via the smart visor," BMW said in a statement. "This spans the rider's entire field of view and provides not only wind protection but also relevant information, which it projects straight into the line of sight as and when it is needed."
Such information would not be needed all the time, because riders would be able to hand over active control of the vehicle at times; the Motorrad and other Vision Next 100 vehicles would be equipped with self-driving technology, according to BMW.

The futuristic motorcycle and the other concepts released during the centennial event were billed as "zero emissions" vehicles, because BMW said it believes the future of transportation is electric.
Other concepts in the Next 100 Years series included a massive Rolls-Royce (measuring nearly 20 feet long) that is referred to as "her" because of the vehicle's AI, called Eleanor. The Rolls-Royce is fully autonomous, with a couch instead of seats and no steering wheel. BMW also unveiled a Mini concept that is partially transparent and designed completely around car-sharing. There would be no need to own this future Mini, because BMW said the vehicle could be summoned to a location with an app, arriving autonomously and ready for use.


Tesla Cars Now Have the Hardware Necessary to Drive Themselves

Tesla announced today, in a blog post on its website, that all of its vehicles -- the Model S, the Model X, and the forthcoming Model 3 -- will have the hardware in place to allow them to be fully autonomous in the future. The vehicles will have eight cameras with 360-degree vision up to 250 meters (about 275 yards). They will also be equipped with 12 ultrasonic sensors that detect "both hard and soft objects" (obstructions like cars and human bodies) at twice the distance of the current Autopilot, as well as forward-facing radar that can detect traffic and events through fog, rain, dust, and even the car in front of you.
Making sense of the world from all of this information requires a huge amount of processing power, so Tesla is using a new onboard computer that's 40 times more powerful than the previous generation. This sensing and processing will come at a price: the current Autopilot costs about $3,000, company head Elon Musk said in a question-and-answer session after the announcement, but the self-driving system costs a hefty $8,000.


Tesla has, however, learned its lesson about releasing powerful new software into the wild, where drivers might not use it as intended. According to the company's blog post:
We will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.
But Musk elaborated in the Q&A, saying that it wouldn't make sense to turn off features that are preventing accidents and increasing safety. The company will update even the oldest Autopilot systems over the air as further testing of the self-driving system yields improvements.
So these vehicles won't be self-driving from day one, but they will be SAE Level 5 fully autonomous, without need of human input, very soon. "The hardware is capable of the highest level of autonomy," Musk said. Adding this hardware now achieves one of his goals in the Tesla Master Plan Part Deux, released in July: "All Tesla vehicles will have the hardware necessary to be fully self-driving with fail-operational capability, meaning that any given system in the car could break and your car will still drive itself safely."



Are You Ready for a New Computer That Reads Your Mind?

This article was originally published at The Conversation. The publication contributed the article to Live Science's Expert Voices: Op-Ed & Insights.
The first computers cost millions of dollars and were locked inside rooms equipped with special electrical circuits and air conditioning. The only people who could use them had been trained to write programs in that specific computer's language. Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects.
This newly immersive world not only is open to more people to experience; it also allows almost anyone to exercise their own creativity and innovative tendencies. No longer are these capabilities dependent on being a math whiz or a coding expert: Mozilla's "A-Frame" is making the task of building complex virtual reality models much easier for programmers. And Google's "Tilt Brush" software allows people to build and edit 3D worlds without any programming skills at all.
My own research hopes to develop the next phase of human-computer interaction. We are monitoring people's brain activity in real time and recognizing specific thoughts (of "tree" versus "dog" or of a particular pizza topping). It will be yet another step in the historical progression that has brought technology to the masses – and will widen its use even more in the coming years.
From those early computers dependent on machine-specific programming languages, the first major improvement allowing more people to use computers was the development of the Fortran programming language. It expanded the range of programmers to scientists and engineers who were comfortable with mathematical expressions. This was the era of punch cards, when programs were written by punching holes in cardstock, and output had no graphics – only keyboard characters.
By the late 1960s mechanical plotters let programmers draw simple pictures by telling a computer to raise or lower a pen, and move it a certain distance horizontally or vertically on a piece of paper. The commands and graphics were simple, but even drawing a basic curve required understanding trigonometry, to specify the very small intervals of horizontal and vertical lines that would look like a curve once finished.
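To make that concrete, here is a minimal sketch, written in modern Python rather than any period plotter language, of the arithmetic those programmers had to work out by hand: approximating a quarter circle as a series of tiny horizontal and vertical pen moves. The function and command names are illustrative, not drawn from any real plotter.

```python
import math

def quarter_circle_moves(radius, steps):
    """Approximate a quarter circle as small horizontal/vertical pen moves,
    the way a 1960s pen plotter would have been driven."""
    moves = []
    prev_x, prev_y = radius, 0.0
    for i in range(1, steps + 1):
        angle = (math.pi / 2) * i / steps              # advance along the arc
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        moves.append(("move_x", x - prev_x))           # tiny horizontal step
        moves.append(("move_y", y - prev_y))           # tiny vertical step
        prev_x, prev_y = x, y
    return moves

# With enough small steps, the staircase of moves reads as a smooth curve.
for command, distance in quarter_circle_moves(radius=100, steps=8):
    print(f"{command} {distance:+.2f}")
```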
The 1980s introduced what has become the familiar windows, icons and mouse interface. That gave nonprogrammers a much easier time creating images – so much so that many comic strip authors and artists stopped drawing in ink and began working with computer tablets. Animated films went digital, as programmers developed sophisticated proprietary tools for use by animators.
Simpler tools became commercially available for consumers. In the early 1990s the OpenGL library allowed programmers to build 2D and 3D digital models and add color, movement and interaction to these models.
In recent years, 3D displays have become much smaller and cheaper than the multi-million-dollar CAVE and similar immersive systems of the 1990s, which needed a space 30 feet wide, 30 feet long and 20 feet high to fit their rear-projection systems. Now smartphone holders can provide a personal 3D display for less than US$100.
User interfaces have gotten similarly more powerful. Multitouch pads and touchscreens recognize movements of multiple fingers on a surface, while devices such as the Wii and Kinect recognize movements of arms and legs. A company called Fove has been working to develop a VR headset that will track users' eyes and, among other capabilities, let people make eye contact with virtual characters.
My own research is helping to move us toward what might be called "computing at the speed of thought." Low-cost open-source projects such as OpenBCI allow people to assemble their own neuroheadsets that capture brain activity noninvasively.
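As a toy illustration of what "recognizing specific thoughts" could look like computationally, here is a short sketch using synthetic data and plain NumPy; it does not use any real OpenBCI interface, and the feature vectors and class labels are invented for the example. It trains a nearest-centroid classifier on two sets of labeled recordings and then labels a new one.

```python
import numpy as np

# Hypothetical example: each "recording" is a feature vector extracted from
# EEG channels (for instance, band power per channel). Real pipelines are far
# more involved; this only illustrates the classification step.
rng = np.random.default_rng(0)
features_tree = rng.normal(loc=0.0, scale=1.0, size=(50, 8))  # thinking "tree"
features_dog = rng.normal(loc=1.0, scale=1.0, size=(50, 8))   # thinking "dog"

# Nearest-centroid classifier: average each class, then pick the closer mean.
centroids = {
    "tree": features_tree.mean(axis=0),
    "dog": features_dog.mean(axis=0),
}

def classify(sample):
    return min(centroids, key=lambda label: np.linalg.norm(sample - centroids[label]))

new_sample = rng.normal(loc=1.0, scale=1.0, size=8)  # an unseen recording
print(classify(new_sample))  # most likely "dog", given where it was drawn from
```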
Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I've thought about in the past few minutes. If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought.
With more sophistication, perhaps a writer could wear an inexpensive neuroheadset, imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer's mind.
Once human thought can communicate directly with computers, a new world will open before us. One day, I would like to play games in a virtual world that incorporates social dynamics, as in the experimental games "Prom Week" and "Façade" and in the commercial game "Blood & Laurels."
This type of experience would not be limited to game play. Software platforms such as an enhanced Versu could enable me to write those kinds of games, developing characters in the same virtual environments they'll inhabit.
Years ago, I envisioned an easily modifiable application that would let me keep stacks of virtual papers hovering around me, which I could grab and rifle through to find a reference I need for a project. I would love that. I would also really enjoy playing "Quidditch" with other people while we all experience the sensation of flying via head-mounted displays and control our brooms by tilting and twisting our bodies.
Once low-cost motion capture becomes available, I envision new forms of digital story-telling. Imagine a group of friends acting out a story, then matching their bodies and their captured movements to 3D avatars to reenact the tale in a synthetic world. They could use multiple virtual cameras to "film" the action from multiple perspectives, and then construct a video.
This sort of creativity could lead to much more complex projects, all conceived in creators' minds and made into virtual experiences. Amateur historians without programming skills may one day be able to construct augmented reality systems in which they can superimpose onto views of the real world selected images from historic photos or digital models of buildings that no longer exist. Eventually they could add avatars with whom users can converse. As technology continues to progress and become easier to use, the dioramas built of cardboard, modeling clay and twigs by children 50 years ago could one day become explorable, life-sized virtual spaces.

Samsung Galaxy S8: Release date, headphone jack, camera, code names and more

The Samsung Galaxy S8 is likely still almost six months away, which of course means everyone is talking about it now.
And with details about the upcoming phones making the rounds, why wouldn't they be? Already, information regarding the screen size, the headphone jack (or lack thereof) and the release date is out there for all to see, so we've rounded it all up here.

1. According to reports...the Galaxy S8 will sport a 5.5-inch AMOLED screen with a 4K display (806 ppi pixel density) and feature an iris scanner...the device will feature a dual rear camera -- a 12MP S5K2L2 sensor and a 13MP sensor made by Sony...and an 8MP front-snapper

2. There is also speculation...that Samsung's next-generation smartphone will come in two variants -- one with a 5.1-inch screen and another with a 5.5-inch screen. Both...are expected to have edge screens

3. Are the headphone jack wars upon us? A...report suggests that Samsung is "actively and aggressively" looking into the development of its own proprietary headphone jack...It is suggested the new headphone jack won’t be USB-C

4. Sources say it will...be incompatible with the iPhone...Samsung supposedly hopes that...manufacturers will get behind this new no-Apple-allowed proprietary jack and give it preference over Lightning headphones...leaving iPhone users with older, less interesting audio products.

5. It appears as though the internal codenames for the two versions of the Galaxy S8 are simply Dream and Dream2...These are the codenames, but the model numbers are said to be SM-G950 and SM-G955.

6. It has been anticipated that the smartphone will be launched on February 26, 2017...Samsung usually announces its next-generation S series smartphones one day before...Mobile World Congress...It looks like the Samsung Galaxy S8, too, will follow the same trend.

7. It is likely...the next handset will have a similar price to its predecessor, meaning...the smaller version could be priced somewhere between $649 and $699...The larger version will likely sell for $749 to $799.



Apple fixes 'bricking' update flaw

Apple says it has fixed a problem that was “bricking” people’s devices while updating to the latest operating system.
Complaints from iPhone and iPad users updating to iOS 10 flooded social media after the software was rolled out on Tuesday.
Discussion around the issue was trending on social media - but Apple said it was limited to a “small number of users”.
Bricking is a term used to describe devices that have been rendered unusable due to a software or hardware fault - as in, the device is now as useful to you as a brick.
The firm apologised to affected customers.
"We experienced a brief issue with the software update process, affecting a small number of users during the first hour of availability,” an Apple spokeswoman said in an emailed statement.
"The problem was quickly resolved and we apologise to those customers.
"Anyone who was affected should connect to iTunes to complete the update or contact AppleCare for help."

The roll-out of iOS 10 comes a week before the iPhone 7 goes on sale. In the meantime, existing owners of Apple devices vented their frustration at the problem.
"Currently sitting here with a bricked iPhone full of photos with a recent family visit,” wrote Courtney Guertin on Twitter.
Teething
It is not the first time Apple has had teething problems in rolling out major updates.
When users tried to update to iOS 5 in 2011, high demand appeared to be behind users getting multiple error messages when trying to update.
More recently, in February this year, Apple faced criticism after an update started bricking devices if they had been repaired by a company other than Apple.
Apple apologised for the problem and issued a software update to fix the issue.
It said Error 53, as it became known, was in fact a security measure designed to make sure the fingerprint sensor on the device had not been tampered with.
