Tuesday, October 25, 2016

Microsoft Event: Come for Windows, Stay for Surface

2:47 PM

Microsoft's Windows 10 event, scheduled for Wednesday, could actually focus more on hardware than on the operating system, given that the next Windows 10 refresh isn't expected until March.

A new Surface device -- possibly an all-in-one computer with a 21-inch or larger screen -- could be in the offing.

Whether Microsoft will unveil updates to its Surface Pro 4 and Surface Book devices or showcase products from its OEMs has generated some debate.

Windows Insiders have been testing new Windows 10 features, including trackpad innovations, The Verge noted. Microsoft might announce an f.lux-like feature to reduce blue light in Windows 10, as well as a new HomeHub smart device control feature. Further, it might bring its Holographic shell to Windows 10 PCs.

What Makes Sense

"It'll be a hardware event," predicted Rob Enderle, principal analyst at the Enderle Group.

"This is the expected refresh of the Surface product line," he told TechNewsWorld, because "all that Surface stuff belongs to the Windows 10 group."

Although some of the speculation may be groundless, "the all-in-one device makes a certain amount of sense because Microsoft hasn't had a desktop Surface product yet," Enderle pointed out.

"The smart money's on the fact that they'll probably have a Surface all-in-one, and the Surface Book and Surface Pro will probably be upgraded," he said. "It's about time."

Improvements in battery life, higher-resolution screens, better touch technology, and "a better overall stylus experience" probably will be unveiled, Enderle suggested. "Everybody has improved their stylus resolution and screens have been getting better."

However, don't expect the Surface Book or Surface Pro to get any thinner, because "they're already pretty thin and will run into thermal limits," he noted.

The Surface all-in-one PC "is what's most likely to be announced," R "Ray" Wang, principal analyst at Constellation Research, also said.

Expect deeper integration with Cortana services, Microsoft's Power BI and more, he told TechNewsWorld.

Moving into AR, VR and Games

Microsoft also might push virtual or augmented reality, Wang suggested. "Look for the battle for VR and AR to continue. With the rumors of the iPhone 8 integrating VR and AR, this is a chance to pre-empt Apple."

Microsoft might make "some type of announcement to counter Nintendo's Switch with their devices," he noted, "but we're not sure if this will happen."

The Nintendo Switch is a new home gaming system unveiled last week. It can be used in single player and multiplayer modes, and it lets gamers play the same title wherever, whenever and with whomever they choose.

Marketing Works

Redstone 2, the Windows 10 update scheduled for March, will have several new features, according to Wang, including an Office hub, better Bluetooth GATT support, onDemand sync with Microsoft OneDrive, interoperability among devices, and gaming services to the devices.

Windows 10 had a 22 percent share of the global operating systems market in September, according to NetMarketShare. Windows 7 continued to dominate with 48 percent.

Microsoft reported that revenue from Surface products grew 9 percent year over year in constant currency in fiscal Q4 2016, driven by sales of the Surface Pro 4 and Surface Book.

Sales totaled US$965 million, but Microsoft didn't state how many units were sold.

"Microsoft has been marketing the Surface hard, and, once they moved from ARM-based products to Intel Core products, they did well," Enderle said. "It shows that when you market something hard, well, it sells."


Floppy Candidate is The Washington Post’s new political game that’s got us in a flap

7:02 AM

You choose a political candidate, then begin flapping your way through the 2016 electoral calendar. With unlockable characters, tons of trivia, and decidedly tongue-in-cheek humour, this is a really smashing spin on a familiar style of game.

Point your browser at The Washington Post for more info and links to download the game for free.


Source: gamethenews[dot]net

Monday, October 24, 2016

OPINION Why Large Companies Can't Innovate

4:46 PM

One of the things that has made Dell World very different is that at the end, one or more controversial speakers take the stage and provide an incredible amount of insight for the folks who haven't left early.

The last three speakers were all fascinating, but it wasn't until I wove their three speeches together that it became clear to me why innovation seems to evaporate the larger a company becomes. I was drawn in particular to why Netscape failed and why Google, outside of ad revenue, largely has been unsuccessful once you factor in the economics.

I'll walk you through this and then close with my product of the week: a new set of headphones from Plantronics, which have become my favorite travel headphones.

The 4 Elements of Innovation

The first speaker used, of all things, the creation of chemotherapy as his quintessential example of innovation. He told the story of how leukemia was a death sentence for children coming into the 1960s with not only a 100 percent fatality rate, but also a horrid end for each child. It was so bad that some doctors refused to see the children, he said, and nurses visiting their wards were covered with sprayed blood. It must have been incredibly difficult to see small children suffering in incredible pain, and the images no doubt deeply disturbed the hospital staff.

Apparently there were four drugs that had some success, but they were all poisons. Each had a different function, each had terrible side effects, and each was potentially deadly. All of them individually only prolonged what was a horrid experience, so many doctors refused to use any of them.

One doctor felt that all four drugs together might work where no single one had. Keep in mind the patients were children, each of the drugs individually was a deadly poison, and that doctor wanted to use all four. Oh, and since there was no animal counterpart to leukemia in children, the testing would have to be on live patients.

He got very little help and was constantly threatened with termination, but he was 98 percent successful, and his work became the foundation for modern day chemotherapy.

The speaker used this example to illustrate his contention that four elements are necessary for innovation to take hold: creativity, the ability to see an alternative; conscientiousness, the ability and drive to work to completion; contentiousness, the ability to fight against a common practice; and a sense of urgency, so the task will be completed in a timely way. (I agree with three of these.)

Interestingly, he also used Steve Jobs as an example, but those of us who knew Steve knew he was neither creative (the ideas always came from someone else) nor really conscientious (he got others to do most of the work). Just ask Steve Wozniak.

Jobs was a visionary, however, and he could see the value in someone else's idea that others often could not. Also, he sure as hell was disagreeable and contentious. The Steve Jobs example suggests that all of the elements necessary for innovation don't have to reside in the same person. It should be possible to create innovative teams that would have all of those traits and end up with something amazing.

But...

Not in a Large Company

The issue is that folks who are contentious and disagreeable, who are free thinkers, don't survive in large firms. They become the nail that the rest of the firm pounds on until they either conform, die or quit. It is actually kind of hard to find visionaries who aren't CEOs for the same reason.

Largely, they are forced to fit inside the visions of someone else, and I think that is why most large firms have to acquire much of their innovative technology after a while. It is why Xerox PARC could create the graphical user interface and mouse, but it took Steve Jobs and Apple to bring them to market.

I recall that the first iPhone-like phone I saw was created at Palm, and that group quickly was disbanded after being shot down by Palm's then-CEO for having a stupid idea. It didn't conform. Even at Apple, it took Steve seeing the threat of a music-playing phone to convince him to pioneer the iPhone and then shepherd it to market.

Microsoft also had a group that created an iPhone-like device before Apple did, and it even designed a tablet better than the iPad, called the "Courier" -- both were killed before ever making it to market. It wasn't that those firms didn't have people who could innovate -- they just treated them like problems, and instead of blessing and driving the related innovations, they forced them out of the company.

Google's Approach

As the Dell World speakers continued, one of the other things that became clear was that the reason Google largely has been a copycat is that it lost track of its identity. The second speaker, talking about coming innovations, showcased a list of cutting-edge firms -- all of which were created by directing people toward something the firm didn't own and monetizing it.

Facebook didn't own the content, Uber didn't own the cars, and Airbnb doesn't own the properties. However, Google was the king of monetizing what it didn't own, and that was its entire model for achieving success.

The implication was that had Google realized what it was best at -- monetizing access -- then it would have created its own Facebook, Uber and Airbnb. Instead, it tried to copy Apple, Microsoft and eventually Facebook, but none of those endeavors has been particularly successful financially, and some have cut into its revenue and added to its costs. For example, both Apple and Microsoft could have been partners instead of rivals.

I recall a saying of longtime IBM chief Thomas Watson Jr.: "Be willing to change everything but who you are." I think Google's -- now Alphabet's -- problem is that it no longer knows what it is.

Wrapping Up

Overall, the Dell World talks left me with two lessons.

One, that if you want innovation, you have to identify those who are likely to innovate, and then back and protect them. Truly consider the concept of the Skunk Works (which has produced some of the most innovative products ever created) and the new policies at Ford, which expressly protect free thinkers.

Two, that if you don't know what your core skill is, then you are likely to fail a lot. I could go down a list of companies, starting with Netscape and ending with Yahoo, that just forgot who they were and either failed or are in the process of failing as a result.

This suggests two other things: If you are a creative free thinker, you don't want to work for a big company that won't protect you; and one of the first things you should ask when considering a new job is whether the firm knows what its core skill is -- in other words, whether it knows more about what it is than its name suggests.

Something to noodle on this week.

Rob Enderle

I was a huge fan of the original BackBeat headphones because they were comfortable, had decent active noise cancellation, and really good battery life. The problem was they were really big, and I lost two pairs of them taking them out of my backpack to get something else and forgetting to put them back in.

At something like US$250 each that got old really, really fast.

Well, Plantronics just released the second generation. Not only are they smaller, letting me leave them in my backpack and work around them, but they're also cheaper, coming in at a more reasonable $199.99.

Plantronics BackBeat Pro 2
I've been carrying them on my last two trips -- they have worked flawlessly, and I haven't come close to losing them.

Even though they are smaller, they cut out the noise on a plane just as well, and I've been burning through a ton of old and new TV shows and several hit movies as a result.

Because I'm a longtime fan of BackBeat, I'm less likely to lose these, and they're less likely to break the bank if I do, the new Plantronics BackBeat Pro 2 headphones are my product of the week. (Now if I could just get them in black instead of brown....)


Saturday, October 22, 2016

DDoS Attack Causes Waves of Internet Outages

10:45 PM

Hundreds of websites -- including those of biggies such as Netflix, Twitter and Spotify -- on Friday fell prey to massive DDoS attacks that cut off access to Internet users on the East Coast and elsewhere across the United States.

Three attacks were launched over a period of hours against Internet performance management company Dyn, which provides support to eight of the top 10 Internet service and retail companies and six of the top 10 entertainment companies listed in the Fortune 500.

The first attack against the Dyn Managed DNS infrastructure started at 11:10 a.m. UTC, or 7:10 a.m. EDT, the company said. Services were restored at about 9:00 a.m. Eastern time.

The second attack began around 11:52 a.m. EDT and was resolved by 2:52 p.m. The third attack, which started around 5:30 p.m., was resolved by about 6:17 p.m., according to Dyn's incident report.

"This is a new spin on an old attack, as the bad guys are finding new and innovative ways to cause further discontent," said Chase Cunningham, director of cyberoperations for A10 Networks.

"The bad guys are moving upstream for DDoS attacks on the DNS providers instead of just on sites or applications."

Dyn "got the DNS stuff back up pretty quick. They were very effective," he told TechNewsWorld.

The Severity of the Attacks

While the attacks were "pretty large," they "didn't bring anything down for very long," Cunningham noted.

Still, without confirmation from Dyn or ISPs, "it's only possible to speculate on the severity of this attack," said Craig Young, a computer security researcher at Tripwire.

"It is, however, reasonable to assume that the attackers controlled a considerable bandwidth in order to take out a service known for its resiliency against this type of attack," he told TechNewsWorld.

Getting the bandwidth to launch the attack has become easier with the proliferation of the Internet of Things. Cybercriminals and hackers increasingly have roped IoT devices into service as botnets to launch successive waves of very large DDoS attacks.

"Threat actors are leveraging insecure IoT devices to launch some of history's largest DDoS attacks," A10's Cunningham noted.

Manufacturers should eliminate the use of default or easy passwords to access and manage smart or connected devices, he said, to "hinder many of the global botnets that are created and deployed for malicious use."
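That recommendation can be made concrete with a small sketch of the check a device vendor could enforce at setup time. This is an illustrative example, not any vendor's actual code, and the credential list below is a made-up sample rather than a real botnet dictionary:

```python
# Illustrative sample of factory-default credential pairs; real botnets
# scan for much larger dictionaries of defaults.
DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "12345"),
    ("user", "user"),
}

def credential_allowed(username: str, password: str, min_length: int = 8) -> bool:
    """Return True only if the pair is not a known default and the
    password meets a minimal length requirement."""
    if (username.lower(), password) in DEFAULTS:
        return False
    return len(password) >= min_length
```

A setup wizard that refuses to complete until `credential_allowed` returns True would keep a device out of the default-password dictionaries these botnets scan for.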

Who's Pulling the Strings?

A nation state or states might be preparing to take down the Internet, cybersecurity expert Bruce Schneier recently warned, and "if there's a threat actor out there with this goal, DNS infrastructure would be a very natural target to expect," Tripwire's Young pointed out.

Another possibility is that the attacks could be a publicity stunt for a new threat actor launching a DDoS as a Service business, he suggested, in which case someone will claim responsibility for the attacks "in coming days or weeks."

Nothing points to one particular group, although it appears that recently more attacks have been coming from South America than from Russia or the former Soviet bloc, A10's Cunningham said.

At this point, considering the source "is total speculation," he added.

The United States Department of Homeland Security reportedly is looking into the attacks.

The explanation may turn out to be simple. Perhaps Dyn's DNS servers were too tempting a target for hackers and led to an attack of opportunity.

...BIND9 is 100 to 1000 times slower than an ideal DNS server, so is much harder to keep up in the face of DDoS.

— Robert Graham ❄ (@ErrataRob) October 21, 2016

BIND is an open source reference implementation of the DNS protocols, as well as production-grade software suitable for use in high-volume, high-reliability applications.

More Trouble Ahead

DDoS attacks have been on the upswing and likely will increase in the near term.

There was a 129 percent year-over-year increase in DDoS attack traffic in the second quarter of this year, according to Akamai.

The company mitigated nearly 5,000 attacks across a variety of industries and verticals during the period.


Linux Foundation Spurs JavaScript Development

1:44 AM

The Linux Foundation earlier this week announced the addition of the JS Foundation as a Linux Foundation project. The move is an effort to inject new energy into the JavaScript developer community.


By rebranding the former jQuery Foundation as the JS Foundation and bringing it under the Linux Foundation umbrella, officials hope to create some stability and build critical mass. The goal is to spark greater interest in pursuing open source collaboration by intermingling some promising new players with some venerable stalwarts.

"What we hear is a need for a center of gravity in the JavaScript ecosystem, and that's what we're hoping to create via the JS Foundation," said Kris Borchers, executive director of the JS Foundation.

"We want to drive the adoption and development of JavaScript technologies, and provide an environment that facilitates collaboration and encourages community for any project that drives innovation forward," he told LinuxInsider.

Joining Forces

A number of initial projects will participate in a new mentorship program that is designed to encourage a level of collaboration and sustainability heretofore lacking. They include Appium, Interledger.js, JerryScript, Mocha, Moment.js, Node-RED and webpack.

Founding members of the JS Foundation include Bocoup, IBM, Ripple, Samsung, Sauce Labs, Sense Technic Systems, SitePen, Stackpath, University of Westminster and WebsiteStartup.

Although the communities are very different, they have a mutual interest in boosting support for their respective technologies.

"Javascript has suffered from a reduced interest of late, and they likely couldn't sustain by themselves anymore," suggested Rob Enderle, principal analyst at the Enderle Group.

That is likely what drove the consolidation, he said.

"A large number of folks in both camps are volunteers, and with a severe shortage of programming talent in paid jobs in the industry, I suspect both thought they could better sustain critical mass together rather than separately," Enderle told LinuxInsider.

One of the things JavaScript users want is for the projects they're using to be dependable, said Jonathan Lipps, director of open source at Sauce Labs.

Everyone loves to hate "javascript fatigue," he told LinuxInsider.

"How much worse does that fatigue become when a project which has a lot of adoption all of a sudden loses its contributors, and all of the users are forced to migrate to something else?" Lipps asked.

One of the goals of the JS Foundation is to create a level of stability in the ecosystem that heads off that scenario.

"I think we'll also see as a result a counterforce to the fragmentation trend," said Lipps. "If we can get projects working together and collaborating under a nonprofit umbrella, maybe we'll see more of them joining forces and providing the users with fewer, more sustainable choices."

More Exposure, More Adoption

A new level of cooperation could pay dividends for Sauce Labs by encouraging wider adoption of its Appium platform. The company's goal is for Appium to become the industry's most popular mobile automation tool.

"Donating Appium to the JS Foundation is a great way to shove Appium even further into view for more developers," Lipps said.

"From a development standpoint, specifically, we hope that giving up Appium's copyright to a nonprofit will encourage other companies who make money off of Appium to be less shy about contributing code to it," he explained.

Another of the initial projects in the program is JerryScript, a lightweight JavaScript engine first developed by Samsung. It can enable smartwatches, wearables and other small devices to operate across an IoT environment, noted Youngyoon Kim, vice president of the Software R&D Center at Samsung.

IBM's Node-RED, another participant, has achieved widespread adoption in the IoT community, noted Angel Diaz, vice president of cloud technology and architecture, allowing users to innovate IoT applications more rapidly and with greater agility.


Tesla: Everyone Gets a Self-Driving Car

1:44 AM

Tesla on Wednesday announced plans to install hardware that will allow all of its cars to become driverless.


The equipment will enable self-driving at a safety level substantially greater than that of human-driven cars, according to the company.

The hardware includes eight cameras to provide 360-degree visibility around the car for more than 800 feet; 12 ultrasonic sensors to detect hard and soft objects; and forward-facing radar capable of seeing through rain, fog, dust and other vehicles.

Tesla also will install a new onboard computer with 40 times the computing power of previous Tesla models. It will run Tesla's neural net for processing information from the other hardware components.

Feature Suspension

Before activating the new hardware, Tesla will calibrate it using information gathered from millions of miles of its vehicles' real-world driving experience.

During the transition, Teslas with the new hardware will lack some first-generation Autopilot features, such as automatic emergency braking, collision warning, lane holding and active cruise control. Tesla will validate those features and then re-enable them over the air, along with new ones, the company said.

Tesla's announcement reflects a thoughtful approach to automated driving, said Richard Wallace, transportation systems analysis director at the Center for Automotive Research.

By making the cars driverless-ready, Tesla easily can turn them into fully automated vehicles via an over-the-air software update.

"That's an advantage Tesla has, because not every car company can do that," Wallace told TechNewsWorld. "It's a sound strategy, and I wouldn't be surprised if some other OEMs decide to follow it."

Doubling Down

Clearly, Tesla is doubling down on its self-driving bet in the belief that the technology represents the future of consumer and commercial vehicles, said Charles King, principal analyst at Pund-IT.

"What's particularly interesting is the company's evolutionary approach -- equipping its cars with the necessary hardware, but stating that various self-driving functions will be enabled by software updates after they are fully validated," he told TechNewsWorld.

"That's a bit counterintuitive, given the tendency among many folks to prefer instant gratification, but it emphasizes the fact that autonomous driving is still a work in progress," King remarked.

"Bottom line -- it's wise of Tesla to acknowledge and to follow a safely incremental path forward," he said.

The Tesla Way

Tesla's attitude toward vehicle automation differs from that of other major players in the space, including the major auto makers and Google.

Both the auto makers and Google are taking a more cautious approach to the technology, King said.

Some auto makers are concentrating their efforts on commercial and industrial uses rather than consumer products.

"That's sensible, since self-driving features are likely to first emerge as pricy options rather than the standard features that Tesla is offering," King noted.

Meanwhile, Google has been deeply leveraging other companies' technologies and efforts in its driverless vehicle.

"In contrast, Tesla's decision to equip its cars with features that it called 'Autopilot' was more than a little hyperbolic. 'Driver Assist' would have been more accurate and less prone to misinterpretation," King said.

Risky Gambit

Making vehicles driverless-ready can give Tesla a first-mover advantage, but it also carries some risks.

"You're pretty much stuck with the hardware you put out there. You're telling your customers that purely through software, you're going to raise their capabilities," CAR's Wallace said.

"If they've played their cards right and they have the necessary suite of sensors, then this strategy is great for them," he continued. "If they're missing something on the sensor side, then this strategy is going to always leave them a little bit short."

Assuming risk comes with the territory of being a first mover, noted Mark Duvall, director of the energy utilization group at the Electric Power Research Institute.

"Building automobiles is a very high-risk business," he told TechNewsWorld, "so it's hard to say if what Tesla announced has a higher risk to what they're doing today. A lot of that will depend on execution."

Government Hangups

Government regulations also could challenge Tesla's driverless plans.

"We aren't all going to suddenly stop driving," Duvall said. "It will be a continuum."

Regulation of self-driving vehicles could vary from state to state, added Jim McGregor, principal analyst at Tirias Research.

"There's less than a handful of states that allow self-driving cars," he told TechNewsWorld. "What happens if Tesla enables its self-driving feature and a state doesn't allow it? They may be jumping the gun here. They may be getting ahead of themselves."


Friday, October 21, 2016

Microsoft AI Beats Humans at Speech Recognition

4:43 AM

Microsoft's Artificial Intelligence and Research Unit earlier this week reported that its speech recognition technology had surpassed the performance of human transcriptionists.

The team last month published a paper describing its system's accuracy, said to be superior to that of IBM's famed Watson artificial intelligence.

The error rate for humans on the widely used NIST 2000 test set is 5.9 percent for the Switchboard portion of the data, and 11.3 percent for the CallHome portion, the team said.
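The figures here are word error rates (WER), the usual metric for these benchmarks: the minimum number of word substitutions, insertions and deletions needed to turn a transcript into the reference, divided by the reference length. A minimal sketch of the computation (a simple word-level edit distance, not Microsoft's scoring pipeline):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via word-level Levenshtein (edit) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j]: edits needed to turn hyp[:j] into ref[:i]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)
```

On this metric, a transcript with one wrong word out of 20 reference words scores 5 percent, roughly the human level the team measured on Switchboard.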

The team improved on the conversational recognition system that outperformed IBM's by about 0.4 percent, it reported.

That improvement is important, noted Anne Moxie, senior analyst at Nucleus Research.

While speech recognition provides an easier way for humans to interact with technology, "it won't see adoption until it has extremely low error rates," she told TechNewsWorld.

Google, IBM and Microsoft are among the companies working on speech recognition systems, but Microsoft is the closest to overcoming the error rate issue, Moxie said. "Therefore, its technology's the most likely to see adoption."

Testing the Technology

The team's progress resulted from the careful engineering and optimization of "convolutional and recurrent neural networks." The basic structures have long been well known but "it is only recently that they have emerged as the best models for speech recognition," its report states.

To measure human performance, the team leveraged an existing pipeline in which Microsoft data is transcribed weekly by a large commercial vendor performing two-pass transcription -- that is, a human transcribes the data from scratch, and then a second listener monitors the data to perform error correction.

The team added NIST 2000 CTS evaluation data to the worklist, giving the transcribers the same audio segments as were provided to the speech recognition system -- short sentences or sentence fragments from a single channel.

For the speech recognition technology, the team used three convolutional neural network (CNN) variants.

One used VGG architecture, which employs smaller filters, is deeper, and applies up to five convolutional layers before pooling.

The second was modeled on the ResNet architecture, which adds a linear transform of each layer's input to its output. The team applied Batch Normalization to the activations.

The third CNN variation is the LACE (layer-wise context expansion with attention) model. LACE is a time delay neural network (TDNN) variant.

The team also trained a fused model consisting of a combination of a ResNet and a VGG model at the senone posterior level. Senones, which are states within context-dependent phones, are the units for which observation probabilities are computed during automated speech recognition (ASR).

Both base models were independently trained and the score fusion weight then was optimized on development data.
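The mechanics of that fusion can be sketched as a per-class weighted average of the two models' posteriors, with the weight chosen by grid search on held-out data. The function names and toy two-class frames below are illustrative; the actual system fused full senone posterior vectors per frame:

```python
def fuse(p1, p2, w):
    """Weighted average of two posterior distributions."""
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

def frame_error(posteriors, labels):
    """Fraction of frames whose argmax class is wrong."""
    wrong = sum(1 for p, y in zip(posteriors, labels)
                if max(range(len(p)), key=p.__getitem__) != y)
    return wrong / len(labels)

def best_weight(dev1, dev2, labels, steps=101):
    """Grid-search the fusion weight on development data."""
    candidates = [i / (steps - 1) for i in range(steps)]
    return min(candidates,
               key=lambda w: frame_error(
                   [fuse(a, b, w) for a, b in zip(dev1, dev2)], labels))
```

When the two models make mistakes on different frames, the fused system can beat either one alone, which is the point of combining complementary models.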

A six-layer bidirectional LSTM was used for spatial smoothing to improve accuracy.

"Our system's performance can be attributed to the systematic use of LSTMs for both acoustic and language modeling as well as CNNs in the acoustic model, and extensive combination of complementary models," the report states.

The Microsoft Cognitive Toolkit

All neural networks in the final system were trained with the Microsoft Cognitive Toolkit (CNTK) on a Linux-based multi-GPU server farm.

CNTK is an open source deep learning toolkit that allows for flexible model definition while scaling very efficiently across multiple GPUs and multiple servers, the team said.

Microsoft earlier this year released CNTK on GitHub, under an open source license.

The Voice

"Voice dictation is no longer just being used for composing text," said Alan Lepofsky, a principal analyst at Constellation Research.

"As chat-centric interfaces become more prevalent, core business processes such as ordering items, entering customer records, booking travel, or interacting with customer service records will all be voice-enabled," he told TechNewsWorld.

To illustrate his point, Lepofsky noted that he had composed his response and emailed it to TechNewsWorld "simply by speaking to my iPad."