Sunday, April 30, 2017

Duty Calls Popular Shooter Back to World War II

Call of Duty will return to its World War II roots when the latest title arrives this November, publisher Activision Blizzard announced on Wednesday. The franchise has been a steady hit maker since its 2003 debut. Various titles in the series have become regular staples in e-sports tournaments and Major League Gaming.

Call of Duty is one of the most popular series in video game history, selling a combined 175 million copies -- second only to Take-Two's Grand Theft Auto franchise. The series has passed US$11 billion in total lifetime revenue.

Although the series has been a robust seller for Activision Blizzard, the critical response to recent titles hasn't been as strong. With Call of Duty: WWII, the series once again sets its sights on the military exploits of the Greatest Generation, including the D-Day landings.

The upcoming game will feature both a single player campaign, which will allow players to take the role of various characters at different points in the war, and an online multiplayer mode. The campaign will take place largely in the latter stages of the war, as the Allies liberate Nazi-occupied Europe.

Based on what has been announced so far, it seems that players will assume the role of Private Ronald "Red" Daniels of the American 1st Infantry Division. They can opt to fill the shoes of a British soldier as well, and even play as a member of the French resistance.

Sledgehammer Games, an independent but wholly owned subsidiary of Activision, is the developer of Call of Duty: WWII. The studio codeveloped Call of Duty: Modern Warfare 3 with Infinity Ward, and it also developed Call of Duty: Advanced Warfare.

The game has not yet been rated.

Every Bullet Counts

Sledgehammer Games has promised that Call of Duty: WWII will be different from other shooters, as it attempts to reimagine the traditional first-person shooter model. Notably, the game will have a darker tone that doesn't shy away from the graphic horrors of war, and it apparently will ditch one of the genre's longstanding features: health regeneration.

This means every bullet could have the character's name on it, and while technically it won't mean "game over," it should change the gaming experience.

"This is going to be a more hardcore game, where the skill curve is greater and the challenges harder," said Roger Entner, principal analyst at Recon Analytics.

"The graphics will also be more realistic, and that could make the violence seem even more intense," he told TechNewsWorld.

"However, we have to realize that this is a title aimed at mature gamers, and today the average age for those gamers is in their 30s," Entner added. "This isn't a game for kids, but gamers are more than kids today."

Return to Duty

The original Call of Duty, which came on the heels of Electronic Arts' Medal of Honor and Battlefield 1942 titles, scored a direct hit when it launched in 2003. Gamers couldn't seem to get enough World War II action.

"The original Call of Duty was certainly helped by those other titles -- but really by the number of World War II movies that brought the conflict to the mainstream again," said Entner. "It was the beginning of the war on terror, and the game combined elements of heroism and patriotism, and people wanted to play out the exploits of these heroes."

As the very real war on terror drags on without an end in sight, the modern conflict setting may have run its course, but the question is whether today's gamers will be interested in returning to World War II again.

Call of Duty won't be the first franchise to jump back to an earlier era. EA's Battlefield 1, released last fall, was one of the year's top sellers, suggesting that fans may be ready for something different -- but there are still risks.

"Games, like movies, have a huge upfront cost, and this is why we see so many sequels," said Entner. "At the same time, the sequels need to be more than just more of the same."

History vs. Gaming

It's possible some people with an interest in history will be drawn back to games, thanks to this World War II setting.

"I did witness a spurt of WWII awareness during the original Call of Duty's popularity that I would attribute directly to the game," said John Adams-Graf, editor of Military Trader, a publication devoted to military memorabilia and historical re-enactors.

"While its release was about the same time as the epic WWII film, Saving Private Ryan, followed by the HBO series, Band of Brothers, the level of interest in WWII small arms that I witnessed at that time probably had more to do with the game than with Hollywood," Adams-Graf told TechNewsWorld.

The game could heighten awareness of the historical event as well.

"The depth of detail and in-game resources will give any gamer immediate access to more information about WWII, the men and women who served, and the weapons that were used," he added.

Total War

Even though this game promises greater realism, thanks to improved graphics and gameplay, Activision and developer Sledgehammer Games may be ensuring that it has something for everybody.

Few details were provided, but based on the trailer and the official website, Call of Duty: WWII could include the popular "zombie" cooperative mode. The mode has been a staple of the series since its 2008 debut in Call of Duty: World at War, and it has appeared in the subsequent Call of Duty: Black Ops titles.

"This helps give the game a low barrier of entry, and promises to have something for different tastes," noted Recon Analytics' Entner.

"This is for those who are in a beer-and-pretzels mood and want silly fun," he said. "It is a way to get away from the realism that the other half of the game offers."



OPINION Our Sci-Fi Future: Silly vs. Terrifying

The future is now, or at least it is coming soon. Today's technological developments are looking very much like what once was the domain of science fiction. Maybe we don't have domed cities and flying cars, but we do have buildings that reach to the heavens, and drones that soon could deliver our packages. Who needs a flying car when the self-driving car -- though still on the ground -- is just down the road?

The media often notes the comparisons of technological advances to science fiction, and the go-to examples cited are often Star Trek, The Jetsons and various 1980s and '90s cyberpunk novels and similar dark fiction. In many cases, this is because many tech advances invite fairly easy comparisons to what those works of fiction presented.

On the other hand, they tend to be really lazy comparisons. Every advance in holographic technology should not immediately evoke Star Trek's holodeck, and every servant-styled robot should not immediately be compared to Rosie, the maid-robot in The Jetsons.

Cool the Jetsons

The Jetsons is one of the cultural references cited most often when new technology emerges. Perhaps this is because so many of today's tech reporters viewed the show in syndication or reruns -- or perhaps it's just because the comparisons are so easy to make.

Comedian John Oliver mocked TV news programs for their frequent references to The Jetsons when reporting developments in flying cars, video phones and other consumer technologies. Yet, the Smithsonian magazine several years ago published a piece titled "50 Years of the Jetsons: Why The Show Still Matters," casting it as part of the golden age of futurism.

Likewise, Diply.com noted "11 Times The Jetsons Totally Predicted The Future," while AdvertisingAge asked, "How Much of the Jetsons' Futuristic World Has Become a Reality?"

First, the show shouldn't matter. It was a cartoon and predicted the future about as accurately as The Flintstones depicted life in the Stone Age. None of The Jetsons' future tech predictions were revolutionary.

Flying cars, flat panel TVs, video phones and robot maids often are called out -- yet all of those innovations were envisioned decades before the TV show hit the airwaves. Science fiction writer H.G. Wells suggested we might have small personal aircraft in his work The Shape of Things to Come -- and we're not much closer to having them now than when he came up with the idea in 1933.

Flat panel TVs, on the other hand, were in development in the early 1960s around the same time the show was on the air, and were envisioned much earlier.

As for video phones, the concept of videotelephony initially became popular in the late 1870s, and Nazi Germany developed a working system in the 1930s. Of course, it is likely more uplifting to compare today's modern tech to The Jetsons than to give the Nazis credit.

As for the robot maid, let's not forget that "robots" were introduced in the 1920 play Rossum's Universal Robots, by Czech writer Karel Čapek, which told of a class of specially created servants. They may not technically have been machines, but they were there to do the jobs people didn't want to do.

Boldly Going the Same Place

Star Trek is another often-referenced show whenever the next cool technology looks anything like its holodeck or replicator technology.

Perhaps this is because it is easy to compare today's 3D printers to replicators, even if the principles underlying them are entirely different. It is equally easy -- lazy, actually -- to suggest that holographic technology eventually could -- one day in the far future -- create a holodeck like the one on Star Trek.

The problem is that beneath the surface, much of the technology seen on the show is different from our current technology and really borders on the impossible. The replicator breaks things down not to the molecular level but to the atomic level.

That is as unlikely ever to become reality as the show's transporter technology -- but that didn't stop the comparisons to the show's magic means of "beaming" when German scientists created a system of scanning an object and recreating it elsewhere.

That isn't quite the same. The German invention basically combines 3D printing with a fax machine. That is hardly deserving of a "Beam me up, Scotty" reference -- even though hardcore fans will happily tell you that exact phrase never has been uttered in any of the TV shows or movies.

Snow Crash and Burn

Many of today's Internet-based technologies -- including 3D and virtual reality, as well as various online worlds -- are compared to similar tech in the works of William Gibson, Bruce Sterling and Neal Stephenson.

In the past 20 years, there have been many suggestions that various worlds emerging online -- from Worlds Inc. in 1995 to Second Life in 2003 -- were close to a "real life Snow Crash," the book that introduced readers to the "Metaverse."

The Metaverse is an online world populated with 3D avatars who can mingle, chat and engage with one another while their real-life users are in distant places around the globe.

Yes, that does sound much like today's online video games and the virtual world Second Life. The amazing part in these comparisons isn't that, as a science fiction writer and visionary, Stephenson was so right about where we were headed -- but rather that anyone would gleefully laud the fact that we're closing in on that reality!

The worlds presented in Snow Crash and the works of Gibson and Sterling aren't exactly utopias. These guys didn't write stories set in perfect worlds free of crime, disease and war. Rather, they are worlds where criminals and giant corporations work hand in hand, as governments have fallen. People are desperate and downtrodden, and they escape to virtual realities because their real lives are awful.

Case, the main character in Gibson's seminal book Neuromancer, is a "console cowboy" -- basically a hacker by another, more creative name. The character is down on his luck, and by taking on a job no one wants, he is able to get his life back on track and get out of the game. Case's story makes for fine reading -- but he's an antihero at best.

He's a criminal who works for bigger criminals. Do we really want to live in such a world -- where a few powerful criminals and corporations rule the world?

When executives at Facebook's Oculus VR cite Snow Crash and the Metaverse, one must wonder what part of that vision of the future they're excited about. It would be akin to developing a robot with true artificial intelligence and proclaiming, "it's just like the Terminator!"

This might change if a film version of Snow Crash ever materializes -- but, for the record, I'm happy that attempts to bring Neuromancer to the big screen have failed. We really don't need another Gibson story to suffer the fate of Johnny Mnemonic.

Brave New World

Cyberpunk stories typically are set in worlds in which corporations control innovation, launch satellites to connect the world to spread information, and maintain private armies. That vision is a lot darker than The Jetsons and Star Trek, of course, but it also is much closer to our reality.

Are there now giant corporations that literally control how we access the Internet? Or others that have what appear to be bizarre projects to further spread their reach to remote parts of the world? Doesn't that pretty much describe tech giants Comcast and Facebook? Or Google?

Now imagine a tech visionary who is a billionaire with a private space program, who also is developing new ways to harness solar power. There have been James Bond books and movies with such villains (see Moonraker and The Man With the Golden Gun) -- but that description also fits Elon Musk. By no means am I suggesting that Musk is a Bond villain, but I'm not the only one to notice those similarities.

Then there is the fact that Musk actually bought the Lotus Esprit submarine car that was used in the film The Spy Who Loved Me for nearly US$1 million. Now tell me that isn't something a Bond villain would do.

The big takeaway is that perhaps it is too easy to see today's world in past entertainment vehicles and ignore the fact that much of what The Jetsons predicted was completely wrong. We don't expect to see a flying car that can fold up into a briefcase, ever.

Likewise, teleportation and warp speed likely will remain just part of the mythos of Star Trek.

When it comes to the darker side of science fiction, we should be cautious about where we are headed. We should heed it as a portent of what to avoid -- not a future we should embrace.

As for Musk, let's just hope a super spy is on standby.



HOW TO Linux's Big Bang: One Kernel, Countless Distros

Even if you're a newcomer to Linux, you've probably figured out that it is not a single, monolithic operating system, but a constellation of projects. The different "stars" in this constellation take the form of "distributions," or "distros." Each offers its own take on the Linux model.

To gain an appreciation of the plethora of options offered by the range of distributions, it helps to understand how Linux started out and subsequently proliferated. With that in mind, here's a brief introduction to Linux's history.

Linus Torvalds, Kernel Builder

Most people with any familiarity with Linux have heard of its creator, Linus Torvalds, but not many know why he created it in the first place. In 1991, Torvalds was a university student in Finland studying computer science. As an independent personal project, he wanted to create a Unix-like kernel to build a system for his unique hardware.

The "kernel" is the part of an operating system that mediates between the hardware and the software running on it. Essentially, it is the heart of the system. Developing a kernel is no small feat, but Torvalds was eager for the challenge and found he had a rare knack for it.
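The kernel's mediating role is easiest to see through a system call. As a rough sketch (the file path here is arbitrary), even a high-level language like Python reaches the hardware only by asking the kernel:

```python
import os

# A user program never touches the disk directly; it asks the kernel to
# do so through a system call. os.write() is a thin wrapper around the
# kernel's write() syscall, operating on a raw file descriptor.
fd = os.open("/tmp/demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello from user space\n")  # the kernel performs the actual I/O
os.close(fd)

# Reading the file back confirms the kernel carried out the request.
with open("/tmp/demo.txt") as f:
    print(f.read().strip())
```

Every language's I/O libraries ultimately funnel through the same narrow syscall interface, which is why one kernel can serve so many different userlands.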

As he was new to kernels, he wanted input from others to ensure he was on the right track, so he solicited the experience of veteran tinkerers on Usenet, the foremost among early Internet forums, by publishing the code for his kernel. Contributions flooded in.

After establishing a process for reviewing forum-submitted patches and selectively integrating them, Torvalds realized he had amassed an informal development team. It quickly became a somewhat formal development team once the project took off.

Richard Stallman's Role

Though Torvalds and his team created the Linux kernel, there would have been no subsequent spread of myriad Linux distributions without the work of Richard Stallman, who had launched the free software movement a decade earlier.

Frustrated with the lack of transparency in many core Unix programming and system utilities, Stallman had decided to write his own -- and to share the source code freely with anyone who wanted it and also was committed to openness. He created a considerable body of core programs, collectively dubbed the "GNU Project," which he launched in 1983.

Without them, a kernel would not have been of much use. Early designers of Linux-based OSes readily incorporated the GNU tools into their projects.

Different teams began to emerge -- each with its own philosophy regarding computing functions and architecture. They combined the Linux kernel, GNU utilities, and their own original software, and "distributed" variants of the Linux operating system.

Server Distros

Each distro has its own design logic and purpose, but to appreciate their nuances it pays to understand the difference between upstream and downstream developers. An "upstream developer" is responsible for actually creating the program and releasing it for individual download, or for including it in other projects. By contrast, a "downstream developer," or "package maintainer," is one who takes each release of the upstream program and tweaks it to fit the use case of a downstream project.

While most Linux distributions include some original projects, the majority of distribution development is "downstream" work on the Linux kernel, GNU tools, and the vast ecosystem of user programs.

Many distros make their mark by optimizing for specific use cases. For instance, some projects are designed to run as servers. Distributions tailored for deployment as servers often will shy away from quickly pushing out the latest features from upstream projects in favor of releasing a thoroughly tested, stable base of essential software that system administrators can depend on to run smoothly.

Development teams for server-focused distros often are large and are staffed with veteran programmers who can provide years of support for each release.

Desktop Distros

There is also a wide array of distributions meant to run as user desktops. In fact, some of the more well-known of these are designed to compete with major commercial OSes by offering a simple installation and intuitive interface. These distributions usually include enormous software repositories containing every user program imaginable, so that users can make their systems their own.

As usability is key, they are likely to devote a large segment of their staff to creating a signature, distro-specific desktop, or to tweaking existing desktops to fit their design philosophy. User-focused distributions tend to speed up the downstream development timetable a bit to offer their users new features in a timely fashion.

"Rolling release" projects -- a subset of desktop distributions -- are crafted to be on the bleeding edge. Instead of waiting until all the desired upstream programs reach a certain point of development and then integrating them into a single release, package maintainers for rolling release projects release a new version of each upstream program separately, once they finish tweaking it.

One advantage to this approach is security, as critical patches will be available faster than non-rolling release distros. Another upside is the immediate availability of new features that users otherwise would have to wait for. The drawback for rolling release distributions is that they require more manual intervention and careful maintenance, as certain upgrades can conflict with others, breaking a system.

Embedded Systems

Yet another class of Linux distros is known as "embedded systems," which are extremely trimmed down (compared to server and desktop distros) to fit particular use cases.

We often forget that anything that connects to the Internet or is more complex than a simple calculator is a computer, and computers need operating systems. Because Linux is free and highly modular, it's usually the one that hardware manufacturers choose.

In the vast majority of cases, if you see a smart TV, an Internet-connected camera, or even a car, you're looking at a Linux device. Practically every smartphone that's not an iPhone runs a specialized variety of embedded Linux too.

Linux Live

Finally, there are certain Linux distros that aren't meant to be installed permanently in a computer, but instead reside on a USB stick and allow other computers to boot them up without touching the computer's onboard hard drive.

These "live" systems can be optimized to perform a number of tasks, ranging from repairing damaged systems, to conducting security evaluations, to browsing the Internet with high security.

As these live Linux distros usually are meant for tackling very specific problems, they generally include specialized tools like hard drive analysis and recovery programs, network monitoring applications, and encryption tools. They also keep a light footprint so they can be booted up quickly.

How Do You Choose?

This is by no means an exhaustive list of Linux distribution types, but it should give you an idea of the scope and variety of the Linux ecosystem.

Within each category, there are many choices, so how do you choose the one that might best suit your needs?

One way is to experiment. It is very common in the Linux community to go back and forth between distros to try them out, or for users to run different ones on different machines, according to their needs.

In a future post, I'll showcase a few examples of each type of distribution so you can try them for yourself and begin your journey to discovering the one you like best.



Mobile Ubuntu Gamble to Fizzle Out in June

Canonical this week said that it will end its support for Ubuntu Touch phones and Ubuntu-powered tablets in June, and that it will shut down its app store at the end of this year. The company previously had signaled the system's demise, but it had not fixed a date.

With Ubuntu Touch, a unified mobile OS based on Ubuntu Linux, Canonical hoped to establish a marketable alternative to the Android and iOS platforms.

Unity Out

The news came on the heels of Canonical founder Mark Shuttleworth's recent disclosure that the company had decided to drop Ubuntu's Unity desktop environment as of the next major distro release. That revelation hinted that more shedding of unsuccessful technologies would follow.

Canonical will replace its shelved flagship desktop environment with GNOME 3. The Unity decision was based, at least in part, on continuing difficulties in migrating it to the Mir display server. That combination was integral to the Ubuntu phone and tablet development.

"Ubuntu phones and tablets will continue to operate. OTA (over-the-air) updates are currently limited to critical fixes and security patches only," said Canonical spokesperson Sarah Dickson.

The critical patching will continue until June 2017, she told LinuxInsider. Then no further updates will be delivered.

Risky Business

Canonical declined to provide details about what led to the end of the Ubuntu phone and tablet technology. However, one obvious reason is the low number of users.

"This is one of those 'What if you gave a party and nobody came?' scenarios," said Charles King, principal analyst at Pund-IT.

It is no doubt disappointing for Canonical, but it also calls into question the viability of any mobile OS that does not have a sizable following or a deep-pocketed vendor behind it, he told LinuxInsider.

"For historical analogies, the early days of the PC business before the market coalesced around Windows and the Mac OS offer some good examples," King observed.

Exit Strategy

For the time being, the app store will continue to function, said Dickson. That means users will be able to continue to download apps, and developers can continue to push updates and bug fixes to existing apps.

"However, it will no longer be possible to purchase apps from the Ubuntu Phone app store from June 2017," she said.

After that date, developers of paid apps already in the store will have two choices: make their apps available for free, or remove them from the store.



Red Hat Gives JBoss AMQ a Makeover

Red Hat on Thursday announced JBoss AMQ 7, a messaging platform upgrade that enhances its overall performance and improves client availability for developers.

JBoss AMQ is a lightweight, standards-based open source platform designed to enable real-time communication between applications, services, devices and the Internet of Things. It is based on the upstream Apache ActiveMQ and Apache Qpid community projects.

JBoss AMQ serves as the messaging foundation for Red Hat JBoss Fuse. It provides real-time, distributed messaging capabilities needed to support an agile integration approach for modern application development, according to Red Hat.

The upgrade will be available for download by members of the Red Hat developers community this summer, the company said.

Technology plays a central role in enabling greater levels of interconnection and scalability, noted Mike Piech, general manager of Red Hat JBoss Middleware.

"With its new reactive messaging architecture, JBoss AMQ is well-suited to support the applications and infrastructure that can help deliver those experiences," he said.

What It Does

"Messaging," in this case, relates to commands passing between servers and client devices. Broadly speaking, this kind of technology is called "message-oriented middleware," or "MOM," said Charles King, principal analyst at Pund-IT.

"This is purely aimed at easing the lives of enterprise developers whose organizations are Red Hat customers," he told LinuxInsider. "This is especially true for those focused on IoT implementations, since AMQ 7 can open up data in embedded devices used in IoT sensors to inspection, analysis and control."

Red Hat's AMQ 7 broker component is based on the Apache ActiveMQ Artemis project.

Upgrade Details

JBoss AMQ 7 brings technology enhancements to three core components: broker, client and interconnect router.

The JBoss AMQ broker manages connections, queues, topics and subscriptions. It uses innovations from Artemis to provide an asynchronous internal architecture. This increases performance and scalability and enables the system to handle more concurrent connections and achieve greater message throughput.
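What a broker manages can be illustrated with a toy sketch. This is not JBoss AMQ's implementation -- the class and method names here are invented for illustration -- but it shows the two delivery models a broker juggles: queues, where each message goes to exactly one consumer, and topics, where each message fans out to every subscriber.

```python
from collections import defaultdict, deque

class TinyBroker:
    """Toy message broker: named queues (point-to-point delivery)
    and topics (publish/subscribe fan-out). Illustrative only."""

    def __init__(self):
        self.queues = defaultdict(deque)      # queue name -> pending messages
        self.subscribers = defaultdict(list)  # topic name -> callbacks

    def send(self, queue, message):
        self.queues[queue].append(message)    # exactly one consumer will get it

    def receive(self, queue):
        q = self.queues[queue]
        return q.popleft() if q else None     # None when the queue is empty

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:    # every subscriber gets a copy
            cb(message)

broker = TinyBroker()
broker.send("orders", {"id": 1})
print(broker.receive("orders"))               # point-to-point: one receiver

seen = []
broker.subscribe("alerts", seen.append)
broker.subscribe("alerts", seen.append)
broker.publish("alerts", "defect found")      # pub/sub: fan-out to both
print(seen)
```

A production broker like the one in JBoss AMQ layers persistence, acknowledgments and concurrent connection handling on top of this same basic bookkeeping.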

JBoss AMQ 7 Client supports popular messaging APIs and protocols by adding new client libraries. These include Java Message Service 2.0, JavaScript, C++, .Net and Python, along with existing support for the popular open protocols MQTT and AMQP.

JBoss AMQ 7 now offers broad interoperability across the IT landscape, which lets it open up data in embedded devices to inspection, analysis and control.

The new interconnect router in JBoss AMQ 7 enables users to create a network of messaging paths spanning data centers, cloud services and geographic zones. This component serves as the backbone for distributed messaging. It provides redundancy, traffic optimization, and more secure and reliable connectivity.

Key Improvements

Two key enhancements in JBoss AMQ 7 are a new reactive implementation and interconnect features, said David Codelli, product marketing lead for JBoss AMQ at Red Hat.

The first is an asynchronous (reactive) server component, which no longer ties up resources for each idle connection. This makes developers' interactions with messaging more performant, in terms of both throughput and latency.

"It also means that more developers can use the system since some asynchronous threading models can't use traditional brokers," Codelli told LinuxInsider.
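The scaling argument behind a reactive design can be sketched in a few lines of asyncio. This is not JBoss AMQ code, just an illustration of the principle: an idle connection that is merely awaiting a message costs no thread, so one thread can hold open a large number of connections at once.

```python
import asyncio

async def handle_connection(conn_id: int, inbox: asyncio.Queue) -> str:
    # Idling here ties up no thread; the event loop serves other handlers.
    msg = await inbox.get()
    return f"connection {conn_id} received {msg}"

async def main() -> int:
    inbox = asyncio.Queue()
    # 1,000 "connections" sit idle concurrently on a single thread.
    tasks = [asyncio.create_task(handle_connection(i, inbox))
             for i in range(1000)]
    for i in range(1000):
        await inbox.put(i)              # deliver one message per connection
    results = await asyncio.gather(*tasks)
    return len(results)

print(asyncio.run(main()))              # all 1,000 handled by one thread
```

A traditional thread-per-connection broker would need a thousand threads (and their stacks) to do the same, which is the resource cost the reactive architecture avoids.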

The second key enhancement is the addition of numerous benefits provided by the new interconnect feature, especially the ability to use one local service.

"Developers can connect to one local service to have access to a stream -- sending or receiving -- of data that can travel throughout their organization," Codelli said, "crossing security boundaries, geographic boundaries and public clouds without the developer having to configure anything."

Targeted Users

The messaging platform will better meet the needs of engineering teams who share data in a loosely coupled, asynchronous fashion. Similarly, operations teams will be better able to share feeds of critical data with every department in a company, even if it has a global reach.

"Think giant retailers that need to propagate product defect data as quickly and efficiently as possible," suggested Codelli.

Two more intended beneficiaries of the upgrade are financial services software developers and IT managers dealing with transportation and logistics tasks.

The interconnect feature is ideal for things like payment gateways. It is also helpful to architects who must contend with large amounts of disparate data flowing over a wide area, despite delays and outages.

Cross-Platform Appeal

One of the appeals of MOM technologies such as Apache ActiveMQ is that message modules can be distributed across multiple heterogeneous platforms, Pund-IT's King said. That effectively insulates application developers from having to be experts in enterprise operating systems and network interfaces.

"A cross-platform value proposition is central to what Red Hat is doing," King observed. "Overall, this new update to AMQ 7 is likely to be welcomed by numerous customers, especially those focused on distributed applications and IoT."



Saturday, April 22, 2017

Facebook's Latest Moon Shot: I Think, Therefore I Type

Facebook on Wednesday told its F8 conference audience about two new cutting-edge projects that could change the way humans engage with devices.

Over the next two years, the company will work on a new technology that will allow anyone to type around 100 words per minute -- not with fingers, but using a process that would decode neural activity devoted to speech.

What Facebook envisions is a technology that would resemble a neural network, allowing users to share thoughts the way they share photos today.

This technology also could function as a speech prosthetic for people with communication disorders or as a new way to engage in an augmented reality environment, suggested Regina Dugan, vice president of engineering at Facebook's Building 8.

The other project announced at F8 would change the way users experience communication input -- that is, it would allow them to "hear" through the skin. The human body has, on average, about two square meters of skin that is packed with sensors. New technologies could use them to enable individuals to receive information via a "haptic vocabulary," Facebook said.

Putting the Best Interface Forward

About 60 employees currently are working on the Building 8 projects -- among them machine learning and neural prosthetic experts. Facebook likely will expand this team, adding experts with experience in brain-computer interfaces and neural imaging.

The plan is to develop noninvasive sensors that can measure brain activity and decode the signals that are associated with language in real time.

The technology would have to advance considerably before individuals would be able to share pure thoughts or feelings, but the current effort can be viewed as a first step toward that goal, said Facebook CEO Mark Zuckerberg.

Type With Your Brain

Facebook is not the only company working on technology that could allow a direct brain-computer connection. Elon Musk last month launched Neuralink, a startup dedicated to developing a "neural lace" technology that could allow the implanting of small electrodes into the brain.

One difference between Facebook's "type with your brain" concept and Musk's neural lace project is that "Facebook wants it to be noninvasive," said futurist Michael Rogers.

"That's a tough one, because the electrical impulses in the brain are very small and the skull is not a great conductor," he told TechNewsWorld.

The nerve signals outside the skull that trigger muscles are much stronger, so reading brain waves noninvasively means filtering out a lot of much stronger noise.

"The existing 'neural cap' technologies that allegedly let wearers, say, learn to control computer games with their brains, are actually probably training them to -- unconsciously -- use their eyebrow and forehead muscles," suggested Rogers. "Interesting, but not the same as typing with your brain."

Hearing Through Touch

Just as Braille has allowed the blind to read, Facebook's technology to hear through the skin could be a game changer for those who are deaf.

"Sound through the skin sounds more like a tool for the profoundly deaf than for everyday use," observed Rogers -- and it might not be the best solution for those with such hearing problems.

"I'm still a big believer in the potential for bone conduction temples on smart glasses as a really good audio solution that doesn't require earbuds," he added.

Delivering information through this technology may be less challenging than processing it.

"It is technically easy to translate the data to a speech pattern, but the hard part is training the human to understand it," said Paul Teich, principal analyst at Tirias Research.

"Facebook can simplify the words to codes, but you are still stuck with the training to understand the code," he told TechNewsWorld.

"There is no way to immediately understand what is being sent, and this isn't very intuitive as I see it," added Teich. "With the right training, however, a coded message could be understood in real time."

Facebook's Timeline

Zuckerberg emphasized that Facebook is taking the first steps toward development of these technologies, and it could be a long time before they have practical applications.

"Having bold visions and making ambitious predictions characterizes some of today's most highly regarded tech entrepreneurs," said Pascal Kaufmann, CEO of Starmind.

"There seem to be no limits, and one can easily fall prey to the belief that everything is possible," he said.

"However, climbing a high tree is not the first step to the moon -- actually, it is the end of the journey," Kaufmann told TechNewsWorld.

"Despite some gradual improvements in speech recognition these days, understanding of context and meaning is still far beyond our technological capabilities. Taking a shortcut through directly interfacing human brains, and circumventing the highly complex translation from nerve signals into speech and back from speech recognition into nerve signals, I consider one of the more creative contributions in the last few months," he said.

"It is certainly an alternative to the brute force approaches and the unjustified AI hype that seems like climbing a tree rather than building a space rocket. Zuckerberg's announcement describes a space rocket; it is up to us now to develop the technology to aim for the moon," said Kaufmann.

"Both of these technologies are well within reach. This kind of brain imaging has been used in academic settings for years, and scientists developed versions of 'skin hearing' devices over 30 years ago," said Michael Merzenich, designer of BrainHQ, professor emeritus in the UCSF School of Medicine, and winner of the 2016 Kavli Prize.

"The real challenge is making this science practical. How can a brain be trained to learn to control these new devices?" he asked.

"Every new form of communication -- from writing to the telephone to the Web -- changes human culture and human brains," Merzenich told TechNewsWorld. "What impact will using these new devices have on our brains -- and our culture?"



Report: Commercial Software Riddled With Open Source Code Flaws

Black Duck Software on Wednesday released its 2017 Open Source Security and Risk Analysis, detailing significant cross-industry risks related to open source vulnerabilities and license compliance challenges.

Black Duck conducted audits of 1,071 applications for the study last year. The audits show widespread weaknesses in addressing open source security vulnerability risks across key industries.

Open source security vulnerabilities pose the highest risk to e-commerce and financial technologies, according to Black Duck's report.

Open source use is ubiquitous worldwide. An estimated 80 percent to 90 percent of the code in today's software applications is open source, noted Black Duck CEO Lou Shipley.

Open source lowers dev costs, accelerates innovation, and speeds time to market. However, there is a troubling level of ineffectiveness in addressing risks related to open source security vulnerabilities, he said.

"From the security side, 96 percent of the applications are using open source," noted Mike Pittenger, vice president for security strategy at Black Duck Software.

"The other big change we see is more open source is bundled into commercial software," he told LinuxInsider.

The open source audit findings should be alarming to security executives. The application layer is a primary target for hackers. Thus, open source exploits are the biggest application security risk that most companies have, said Shipley.

Understanding the Report

The report's title, "2017 Open Source Security and Risk Analysis," may be a bit misleading. It is not an isolated look at open source software. Rather, it is an integrated assessment of open source code that coexists with proprietary code in software applications.

"The report deals exclusively with commercial products," said Pittenger. "We think it skews the results a little bit, in that it is a lagging indicator of how open source is used. In some cases, the software was developed three, five or even 10 years ago."

The report provides an in-depth look at the state of open source security, compliance, and code-quality risk in commercial software. It examines findings from the anonymized data of more than 1,000 commercial applications audited in 2016.

Black Duck's previous open source vulnerability report was based on audits involving only a few hundred commercial applications, compared to the 1,071 software applications audited for the current study.

"The second round of audits shows an improving situation for how open source is handled. The age of the vulnerabilities last year was over five years on average. This year, that age of vulnerability factor came down to four years. That is a pretty big improvement over last year," Pittenger said.

Awareness Improving

Through its research, Black Duck aims to help development teams better understand the open source security and license risk landscape. Its report includes recommendations to help organizations lessen their security and legal risks.

"There is increased awareness. More people are aware that they have to start tracking vulnerabilities and what is in their software," said Pittenger.

Black Duck conducts hundreds of open source code audits annually, targeting merger and acquisition transactions. The audits, performed by its Center for Open Source Research and Innovation (COSRI), revealed both high levels of open source use and significant risk from open source security vulnerabilities.

Ninety-six percent of the analyzed commercial applications contained open source code, and more than 60 percent contained open source security vulnerabilities, the report shows.

All of the targeted software categories were shown to be vulnerable to security flaws.

For instance, the audit results of applications from the financial industry averaged 52 open source vulnerabilities per application, and 60 percent of the applications were found to have high-risk vulnerabilities.

The audit disclosed even worse security risks for the retail and e-commerce industry, which had the highest proportion of applications with high-risk open source vulnerabilities. Eighty-three percent of audited applications contained high-risk vulnerabilities.

Report Revelations

The status of open source software licenses might be even more troubling -- the research exposed widespread conflicts. More than 85 percent of the applications audited had open source components with license challenges.

Black Duck's report should serve as a wake-up call, considering the widespread use of open source code. The audits show that very few developers are doing an adequate job of detecting, remediating and monitoring open source components and vulnerabilities in their applications, observed Chris Fearon, director of Black Duck's Open Source Security Research Group, COSRI's security research arm.

"The results of the COSRI analysis clearly demonstrate that organizations in every industry have a long way to go before they are effective at managing their open source," Fearon said.

The use of open source software is an essential part of application development. Some 96 percent of scanned applications used open source code. The average app included 147 unique open source components.

On average, vulnerabilities identified in the audited applications had been publicly known for more than four years, according to the report. Many commonly used infrastructure components contained high-risk vulnerabilities.

Even versions of the Linux kernel, PHP, Microsoft .NET Framework, and Ruby on Rails were found to have vulnerabilities. On average, apps contained 27 vulnerable open source components.

Significant Concerns

Many of the points Black Duck's report highlights are longstanding issues that haven't registered a negative impact on open source to any great degree, observed Charles King, principal analyst at Pund-IT.

"The findings are certainly concerning, both in the weaknesses they point to in open source development and how those vulnerabilities are and can be exploited by various bad actors," he told LinuxInsider.

With security threats growing in size and complexity, open source developers should consider how well they are being served by traditional methodologies, King added.

Illegal Code Use

The illegal use of open source software is prevalent, according to the report, which may be attributed to the incorrect notion that anything open source can be used without adhering to licensing requirements.

Fifty-three percent of scanned applications had "unknown" licenses, according to the report. In other words, no one had obtained permission from the code creator to use, modify or share the software.

The audited applications contained an average of 147 open source components. Tracking the associated license obligations and spotting conflicts without automated processes in place would be impossible, according to the report.

Some 85 percent of the audited applications contained components with conflicts, most often violations of the General Public License, or GPL. Three-quarters of the applications contained components under the GPL family of licenses. Only 45 percent of them were in compliance.

Open source has become prominent in application development, according to a recent Forrester Research report referenced by Black Duck.

Custom code comprised only 10-20 percent of applications, the Forrester study found.

Companies Ignore Security

Software developers and IT staffers who use open source code fail to take the necessary steps to protect the applications from vulnerabilities, according to the Black Duck report. Even when they use internal security programs and deploy security testing tools such as static analysis and dynamic analysis, they miss vulnerable code.

Those tools are useful at identifying common coding errors that may result in security issues, but the same tools have proven ineffective at identifying vulnerabilities that enter code through open source components, the report warns.

For example, more than 4 percent of the tested applications had the Poodle vulnerability. More than 4 percent had Freak, and more than 3.5 percent had Drown. More than 1.5 percent of the code bases still had the Heartbleed vulnerability -- more than two years after it was publicly disclosed, the Black Duck audits found.

Recommended Actions

Some 3,623 new open source component vulnerabilities were reported last year -- almost 10 vulnerabilities per day on average, a 10 percent increase from the previous year.

That makes the need for more effective open source security and management more critical than ever. It also makes the need for greater visibility into and control of the open source in use more essential. Detection and remediation of security vulnerabilities should be a high priority, the report concludes.

The Black Duck audit report recommends that organizations adopt the following open source management practices:

  • take a full inventory of open source software;
  • map open source to known security vulnerabilities;
  • identify license and quality risks;
  • enforce open source risk policies; and
  • monitor for new security threats.
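As a rough illustration of the first two recommendations, an inventory can be modeled as a component-to-version map and checked against a feed of known vulnerabilities. This is a toy sketch under stated assumptions: the component names, versions and the `EXAMPLE-0001` advisory are made-up stand-ins (only CVE-2014-0160, Heartbleed, is a real identifier), and a production tool would query a live database such as the NVD rather than a hard-coded dictionary.

```python
# Toy sketch of the report's first two recommendations: take an inventory of
# open source components, then map it to known security vulnerabilities.
# All data is illustrative; only CVE-2014-0160 (Heartbleed) is a real ID.

# Hypothetical inventory: component name -> version in use
inventory = {
    "openssl": "1.0.1f",
    "examplelib": "2.0",
    "left-pad": "1.3.0",
}

# Hypothetical vulnerability feed: (component, affected version) -> advisory ID
known_vulns = {
    ("openssl", "1.0.1f"): "CVE-2014-0160",  # Heartbleed
    ("examplelib", "2.0"): "EXAMPLE-0001",   # made-up advisory for illustration
}

def find_vulnerable(inventory, known_vulns):
    """Return components whose in-use version matches a known vulnerability."""
    return {
        name: advisory
        for (name, version), advisory in known_vulns.items()
        if inventory.get(name) == version
    }

flagged = find_vulnerable(inventory, known_vulns)
print(flagged)  # {'openssl': 'CVE-2014-0160', 'examplelib': 'EXAMPLE-0001'}
```

The point of the sketch is the report's argument about automation: once the inventory exists as data, mapping it to vulnerabilities and re-checking as new advisories arrive is trivial, whereas doing the same by hand across an average of 147 components per application is not.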



Google Spins VR Experiences on the Web

Google on Thursday announced that WebVR on Chrome is now compatible with its low-cost Google Cardboard virtual reality system. It also launched WebVR Experiments, an online showcase for virtual reality content in development.

WebVR became available on Daydream-ready phones earlier this year.

The newly launched WebVR Experiments are essentially proof-of-concept offerings -- from simple VR games such as Konterball (ping pong) to The Musical Forest, which lets users around the world tap or click on objects to make sounds.

Chrome desktop support for virtual reality headsets such as Oculus Rift and HTC Vive is in development and will be available soon, Google said. In the meantime, those without a Cardboard or Daydream unit can view the WebVR Experiments in 2D on a desktop PC or a handset.

Google has invited users to submit their own projects, which could be featured in the WebVR gallery.

VR for the Mainstream

As VR is still very much a new -- and arguably cutting-edge -- technology, Google's efforts appear to be aimed toward attracting a wider audience that may not be ready to spend big money to experience it yet.

"It isn't as much about mainstream acceptance -- it is simply about trying to scale mainstream awareness and exposure," explained Paul Teich, principal analyst at Tirias Research.

"The sales cycle always starts with awareness, then moves to consideration," he told TechNewsWorld.

"In this way, WebVR Experiments is the consumer friendly and social front-end to the fairly dry WebVR information site, as well as the work-in-progress deep-geek WebVR developer site," added Teich.

Taking the Low Road

Instead of appealing to early adopters with expensive hardware promising the next big thing, Google is aiming to win over the masses via low-cost solutions.

"Google took what is an interesting approach to VR, and rather than going the high-end route showcased by HTC and Facebook -- with Oculus Rift -- they started at the other end with smartphones and a cheap headset," said Rob Enderle, principal analyst at the Enderle Group.

Whether this strategy is the right one is yet to be determined.

"Google's approach could get to volume, but the experience was low quality," Enderle told TechNewsWorld, "while HTC and Facebook had better quality, but the price point kept them from critical mass -- and the necessary tethers were dangerous."

However, Google's quality has improved over time, and it has focused more sharply on showcasing the technology, while similar focus and improvement haven't yet been seen from the higher-priced alternatives.

"As a result, Google's side appears to be closer to success, and this latest step is another example of that progress," added Enderle.

Seeing Is Believing

As with many forms of visual technology that have come before -- notably HDTV -- people have to see it to know what they might be missing. The issue is even more amplified with VR. Unlike higher-definition TV, which can be explained as something that "looks better," VR is hard to explain if you haven't seen it.

"The challenge for VR is that most online consumers have not been exposed to it yet. If they have heard of VR, they don't know what it really is, and you certainly don't have the urge to upgrade all of that as the capabilities and depth of experiences improve," said Tirias Research's Teich.

"If you don't know what you're missing out on, then you don't have an urge to buy the equipment and rent the experiences and services," he said.

This is why Google is ensuring that people both understand what VR is and -- more importantly -- can access it in its early days.

"The first task is to educate and show people what they are missing, so if consumers like what they see on a 2D display, they might buy a cheap Cardboard clone or knockoff with their current cheap smartphone," suggested Teich.

"This is also about showcasing other people's stuff, and what you can do in the sandbox to create VR experiences," said Roger Kay, principal analyst at Endpoint Technologies Associates.

"The truth is that VR needs help; there has been a lot of hype, but it hasn't taken off as quickly as its developers and supporters thought or hoped it would," he told TechNewsWorld.

From Cardboard to Serious VR

Google's bet is that if users like what they see through Cardboard, they might upgrade their smartphone and buy an $80 Daydream viewer, and so on.

"This is another prod in the direction of making VR more practical," noted Kay.

"The point is that they start down the path for free, or nearly so, and then get hooked on the experience and content. WebVR is that first taste of fun and engaging content, with the very explicit message that there are already even more engaging ways to view it," said Teich.

"This use of Web sources and related expanded developer support further increases the probability that their cellphone-based approach will [be] that one experience that will drive people to their platform before the high-end folks can solve their price, tether and content issues," Enderle suggested. For Google, "this represents a solid -- but not yet winning -- step towards the finish line in this race."



Moby, LinuxKit Kick Off New Docker Collaboration Phase

Docker this week introduced two new projects at DockerCon with an eye to helping operating system vendors, software creators and in-house tinkerers create container-native OSes and container-based systems.

The projects are based on a new model for cross-ecosystem collaboration and the advancement of containerized software. Both projects aim to help users adopt container technology for all major technology platforms used in data centers and the cloud, as well as in the Internet of Things.

The Moby Project provides a library of components, plus a framework for assembling them into custom container-based systems. It also provides a community center for container enthusiasts to experiment and exchange ideas.

LinuxKit bundles the tools to build custom Linux subsystems with just the components the runtime platform requires for Linux container functionality. It provides the Linux elements otherwise missing as a component for a container platform on non-Linux systems such as Mac and Windows computers.

The projects have a shared goal of advancing the software containerization movement and helping take containers mainstream. The two projects mark the start of Docker's next phase of container innovation, said David Messina, senior vice president of marketing at Docker.

The new projects provide a way to create, share, use and build container systems that was not possible with any open source project in the past, he said.

Moby's open source structure enables Docker to collaborate on architecture, design and experimentation with bleeding-edge features, Messina told LinuxInsider.

Docker developed LinuxKit in collaboration with silicon partner ARM, infrastructure providers like HPE, and cloud companies including Microsoft and IBM. Docker released LinuxKit as an open source project to be managed by The Linux Foundation under its open-governance practices.

The Moby Project and LinuxKit make customer use of Docker technology easier and more effective. The new phase of evolution is mainstream deployment, tied to the increasing specialization of use cases across all industries, said Messina.

"Both these projects are about leveraging interchangeable containerized components to create new systems," he explained.

Driving Factors

The driving interest in Docker is getting one uniform packaging format, API and tooling from dev to ops. Its promise is the ability to develop software across any language, said Messina. It also creates applications that are portable across any infrastructure in a much more agile fashion.

The new collaboration initiatives could lead to faster, simpler deployments, said Charles King, principal analyst at Pund-IT.

"Both efforts are similarly aimed at simplifying critical parts of deploying and supporting container environments," he told LinuxInsider. "Docker's decision to open source the technology and enlist notable partners -- including HPE, Intel, ARM, IBM and Microsoft -- in the effort suggest that it is on the right path."

How Moby Works

Moby allows anything that can be containerized to become a Moby component, which will generate continuing opportunities for collaboration with other projects outside of Docker.

Contributors can leverage well-tested common components to build highly specialized container systems more rapidly. With many deployments in place, contributors can differentiate on features.

The Moby library provides participants with more than 80 components derived from Docker. Participants also can bring their own components packaged as containers with the option to mix and match among all of the components to create a customized container system.

The new development phase grew out of a program Docker began building last year to develop a toolkit for assembling custom Linux subsystems. The intention was to create a more native experience for its desktop (Windows, Mac) and cloud platforms. That work became LinuxKit, which provides the community with a solution for creating a custom OS.

"Moby gives the same tool that Docker uses internally to build, test and package Docker software to the community, so it will accelerate innovation and help produce specialized architectures for running containers," noted Giorgio Regni, CTO at Scality.

"It also means we can use the same tools to build private VM images, bare metal images and public cloud images in a unified way," he told LinuxInsider.

That is all part of what is driving interest in the use of container technology. Developers want choice and freedom, Regni said. Containers help them achieve freedom of coding language, freedom of Linux-based distribution, freedom of runtime -- public cloud, local virtual machines, servers or even laptops.

Building From Kit

LinuxKit allows users to create secure Linux subsystems. That security is anchored around the inherent secure container design. The kit makes it easy for users to assemble the Linux subsystem with only needed services. All the components run in containers.

LinuxKit produces a minimalist boot environment to run containers, which provides a security advantage, as it creates a smaller attack surface than general purpose systems.

It also provides a read-only root file-system for an immutable infrastructure approach to deployments enabled by InfraKit.

LinuxKit has a community-first security process and will serve as an incubator for security-related innovations.

LinuxKit's container-native nature gives it a minimal size of only 35 MB and a minimal boot time. Since all system services are containers, everything can be removed or replaced. This container-native approach makes it highly portable, able to work in many environments: desktop, server, IoT, mainframe, bare metal and virtualized systems.
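To make the "every system service is a container" model concrete, a LinuxKit image is described by a YAML file that names a kernel plus the containerized services to include, and nothing else. The sketch below follows the general shape of the project's published examples; the image names and tags are illustrative placeholders, not pinned releases.

```yaml
# Hypothetical linuxkit.yml: a minimal bootable image with just a DHCP
# client and an SSH daemon. Image tags are placeholders for illustration.
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest      # minimal init and containerd
  - linuxkit/runc:latest      # container runtime
onboot:
  - name: dhcpcd              # one-shot container run at boot
    image: linuxkit/dhcpcd:latest
services:
  - name: sshd                # long-running containerized service
    image: linuxkit/sshd:latest
```

Building such a file with the project's CLI (the exact command has varied across releases) produces a bootable image containing only the listed components, which is where the small footprint and reduced attack surface come from: anything not named in the file simply isn't in the image.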

Easier Adoption Driver

Containerization can offer a simpler, faster and more elegant way than traditional virtualization platforms to deploy and support business workloads. Customers easily can build LinuxKit images optimized for the hardware platforms and operating systems they employ, said Pund-IT's King. Those are crucial points for technically savvy organizations that depend on distributed applications.

"That said, there is not really an either/or choice between containers and virtualization," observed King. "Both can be powerful solutions for a range of processes. Both are intelligent extensions of container technology that also support Docker's business strategy."

Although it is hard to predict the viability of LinuxKit and the Moby Project at this point, King added, you can not fault Docker's ambition.



OPINION Why Is It OK to Abuse Customers?

I don't know about you, but I can't seem to get out of my head the image of that poor Asian doctor who, seemingly unconscious, was dragged off that United flight. The fact that the airline did that to a 69-year-old doctor just so it could save money moving employees around is nearly as unbelievable as the initial tone-deaf response from United's CEO, who blamed the passenger. (It was only after a tremendous backlash that the CEO offered an actual apology.)

While the United debacle was going on, I happened to be reviewing Qualcomm's counterclaim against Apple, and holy crap. It alleges that Apple crippled the modems in some iPhones to cover up its use of cheap parts, and that it aggressively acted to prevent anyone, particularly Qualcomm, from pointing it out.

I have the view that if you pay for a thing, you should get that thing -- and Apple customers, according to Qualcomm, are getting screwed. Given how we depend on our phones, my guess is that if this is true, it won't end well for Apple.

I'll share some thoughts on customer abuse and then close with my product of the week: the Netgear Arlo Security Camera system (again).

Customer Abuse

I often wonder if top executives and boards have some weird undiagnosed disease that causes them, from time to time, to do something so incredibly stupid you have to wonder if someone snuck up on a bunch of them and hit them with a stupid stick.

I recall having a discussion with an IBM exec I reported to back in the early 1990s about the company's practice of intentionally creating buggy products and then charging customers to fix the problems it had created. I asked why we were doing something that seemed insane, only to be told, effectively, that since the customer had no choice, IBM could do what it wanted to them and they would pay whatever IBM charged.

It was like selling air. It remains one of the most idiotic responses I've ever heard, and shortly after I left the firm that entire executive team was canned. (Apparently the newly hired CEO, Louis Gerstner, agreed with my assessment.)

Microsoft had a group of executives who covered up that Office 98 wasn't backward-compatible, and a different group covered up the issues with Windows Vista that should have prevented its release. Those issues created massive problems with customers, and most of the folks responsible lost their jobs as a result.

To hit aggressive price points with lithium-ion batteries in the early 2000s, Sony covered up the fact that it hadn't updated its production lines to prevent metal contamination. The batteries became contaminated and caught fire, forcing massive recalls and pretty much wiping out Sony's lithium-ion battery business.

Those batteries could have resulted in an impressive number of deaths had one of them gone up next to a better fuel source on a plane. The lithium-ion coverup followed Sony's institution of a program to put rootkits on PCs in an attempt to combat piracy, which opened those PCs to hacking and put customers at risk. The backlash over that helped wipe out Sony's Walkman business and opened the door to the iPod.

Takata covered up that their airbags were not aging well and actually could kill drivers when they deployed. It apparently did not do anything to address the problem, which eventually was discovered and resulted in the biggest automotive recall in history. It still might put the company out of business.

It appears that Samsung cut short quality testing to get the Galaxy Note7 out quickly only to find out it was catching fire. In an effort to address that problem quickly, it guessed wrong about the cause, and replacement phones caught fire too. To recover some of the costs associated with its massive recall, Samsung decided to sell refurbished Galaxy Note7s, and I doubt that'll end well. I think Samsung has a death wish.

It really seems like an epidemic of stupid at times...

United's Disastrous Decision

There were two paths that United could have taken to move employees to another location without causing an uproar. One was to increase the voucher amount offered to passengers to a point where it was cheaper to charter a plane to move the employees, or simply to have in place what many non-airline companies use, a fleet of smaller planes for employees' use.

What is particularly scary about the method United chose is that it didn't factor in why people weren't taking a US$1,000 voucher to change flights. Its method for choosing which passengers to bump focused only on connections, so those ending their trips at the destination airport were prioritized for bumping.

What if someone's job depended on getting to a location on time? What if someone had a dying relative, a wedding or a funeral to attend? What if someone were a doctor who needed to get to a critical patient? None of those possibilities was taken into account, and the poor guy who was beaten up was in fact a doctor.

United's decision has cost it millions in brand damage, and because the passenger looked as though he might have been Chinese, China is treating this like a racial attack on its people, which could result in sanctions. I bet that before this is over, Congress might put a law on the books addressing it. I'd name it "The United CEO Is An Idiot Law." (By the way, PRWeek wants its award back. Suddenly this is an Oscar 2017-like event.) It may even cost the CEO his job -- all because United didn't have a better way to move employees around, which is sadly ironic given that it is in the transportation business.

At the end of that last linked article, the author asks why it took so long for United even to understand this was a problem. It was because, in the minds of its executives, customers had stopped being people and had become an exploitable resource instead. That attitude generally is considered a company and career killer.

Apple's Secret War on Customers

Between Apple and Samsung, I'm not sure which has the stronger tendency for suicidal policies. Apple clearly has a problem, because it is a firm that is valued largely for its innovation, and that is one word that largely has been used in the past tense since Tim Cook took over for Steve Jobs. While the iPhone has done well -- particularly this last quarter, thanks to Samsung's suicidal moves -- nothing else has risen to diversify Apple's revenue or offset a trend of increasing margin pressure. As a result, Apple has moved to a strategy of aggressively cutting costs.

That sets a foundation for the kind of problem that I mentioned earlier in this column. You see, Apple customers effectively are locked in to Apple services -- which would be OK, as long as Apple didn't see it as an opportunity to mine them, and could grow its revenue and margins by creating more and more compelling products.

However, Apple hasn't done that. The Apple Watch has languished, the iPad is in decline, and the iPad Pro has been a disappointment. MacBooks, Macs and iMacs have been cash cows for so long that reviving them seems increasingly unlikely, which is driving the company to go cheap on components while considering charging more and more for iPhones.

The Qualcomm filing basically just says "Apple is an assh*le," which is far from an uncommon position from any Apple supplier. It gets interesting on page 46 of the whopping 130-page document. It alleges that Apple not only has been using sub-optimal (read cheap) parts, but also has been threatening to retaliate should anyone point that out.

Point 4 on page 46 basically says there are two iPhones in the market sold as the same phone: one with cheap parts, and one with good parts that Apple is crippling so that people can't tell the difference (and thereby avoid the bad phone). However, Apple can't cripple it enough, so people are barred from pointing out that the crippled phone is still better. WTF!?!

Here is the thing: Increasingly, we live on our cellphones. We depend on them to work if there is an emergency. More and more, our lives literally depend on them, and folks think that by buying Apple they are getting the best. However, if Qualcomm is correct, they either are getting a substandard phone -- or worse, an intentionally crippled product.

The potential consequences range from poor performance to bad connectivity, which could leave users with a phone that doesn't work when they most need it. Cutting quality while raising prices and aggressively covering that up only works temporarily. Eventually people figure it out -- and that didn't end well for IBM or for the CEO who was fired shortly thereafter.

Like all of the other examples I've cited here, Apple's alleged action is customer abuse. If it turns out to be true, then the only difference between Apple and all the rest of these bad examples is that Apple has taken more money from its customers. As a reason to buy from a company, that likely falls pretty low on anyone's list.

I'll add one other element that I think is very similar between the old IBM and the new Apple. Both companies enjoyed -- and still enjoy -- phenomenal customer loyalty. Even though IBM's behavior had been going on for years, most customers seemed to give IBM the benefit of the doubt. As a result, when the problem became pronounced, it went nuclear unbelievably fast.

Certainly, it was way too fast for the existing management team to respond, and the result was a purge. It eventually saved the company, but it was a very close thing. Apple's loyalty is, if anything, greater than IBM's was -- and today's consumer market certainly can move a ton faster than enterprise computing did back in the 1980s and early 1990s.

What this means is that if this alleged anti-customer behavior is left in place too long, the backlash against Apple could be unrecoverable -- particularly if Google further reduces the pain of migrating to Android.

Given that many of you have huge investments in Apple, I'm suggesting you might not want to have all those eggs in that same troubled basket. Diversification may save your ass.

Wrapping Up

There are times when I wonder if boards and CEOs either are mentally challenged or suicidal. From Samsung, to United, to Apple, this year has been an increasingly ugly example of executives behaving badly.

I know I missed the chapter in management school that suggested screwing customers was a great business practice, but I seriously think those pages should be torn into little bitty pieces and tossed out, along with the idiots who adhere to this strategy.

In any case, this month has provided a strong "teachable moment." Let's hope a lot of executives learn by watching rather than doing. It is never OK to abuse customers. When companies do, they have translated "customers" into "things." We really don't like being mistreated as "things."

Rob Enderle

When I last wrote about this product, I'd installed two cameras and was impressed that the batteries had lasted a couple of weeks. Well, it's been over a month, and I'm now up to 10 cameras. I've had to recharge only two batteries, both of which had more than half their charge left even though they were in very high-traffic areas, which suggests these puppies could last for months in low-traffic areas.

We've caught stray dogs wandering in our yard, the gate left open and our dogs sneaking in and out of it, delivery people who have lied about deliveries, pet sitters who weren't doing what they said they were doing, and a herd of deer wandering in to munch on our newly planted flowers. This system is AWESOME!

Netgear Arlo Pro

The Netgear Arlo is my third camera system, and it was by far the easiest to set up. The lack of wiring means I can put the cameras anyplace I want, and I can install a ton of them. My dogs and cats each have their own tracking camera, but my wife had me move the one that was on her. (That'll teach me to tell her, huh?)

I did figure out one thing: It is cheaper to buy the cameras in a bundle than one by one. You can get an Arlo system with four cameras for $350 if you shop around, while the cameras individually cost around $150 each.

Sadly, I didn't figure this out until after I'd purchased an additional eight cameras. Further, you get up to five cameras with the free service, but if you want to go to 10 it will set you back $99 a year. However, you then get 30 days of storage for up to 10 GB of data. For 15 cameras, it's $149 a year and you get 60 days storage for up to 100 GB.
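For anyone keeping score, the bundle math above works out like this (a quick sketch using the street prices quoted in this column, which of course fluctuate by retailer):

```python
# Rough cost comparison for the Arlo four-camera bundle vs. buying one
# camera at a time, using the prices quoted above.
BUNDLE_PRICE = 350   # four-camera Arlo system, shopping around
SINGLE_PRICE = 150   # approximate price of one camera bought on its own

def individual_cost(n_cameras: int) -> int:
    """Cost of buying n cameras one at a time."""
    return n_cameras * SINGLE_PRICE

savings = individual_cost(4) - BUNDLE_PRICE
print(savings)  # 600 - 350 = 250 saved by buying the four-pack
```

In other words, the four-pack saves roughly the price of a camera and a half -- a lesson I learned eight cameras too late.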

Arlo just launched a $450 camera, and what makes it different is that it has local storage, a 3G/4G connection, and a massive battery. Sadly, this is only available to large companies or the government, and we know they would never use them to spy on you...

It has been a long time since I was this excited about a product, and that is why the Netgear Arlo is my product of the week -- again! You could call this "the iPod of security camera systems."



Apple May Be Getting Its Innovation Groove Back

Apple reportedly has begun testing a premium iPhone with a revamped display and body, which could be one of three new models the company is expected to launch this fall. The other two likely will be upgrades to the two existing iPhones.

The new design will incorporate curved glass and stainless steel. It will increase the surface area of the display without increasing the size of the phone, Bloomberg reported Tuesday.

"The three-phone rumor has been a consistent rumor over time," observed Kevin Krewell, a principal analyst at Tirias Research.

"That's why I believe it to be what Apple is planning," he told TechNewsWorld.

Introducing a trio of iPhones instead of the typical two makes sense, said Charles King, principal analyst at Pund-IT.

"That's especially true when you consider that this is the 10th anniversary of the iPhone, and the continuing criticism heaped on Apple for lack of innovation," he told TechNewsWorld.

The new top-of-the-line model could cost more than $1,000, according to some Apple watchers.

Bezel-less Display

Giving a smartphone a bigger screen without increasing the overall device size has some design advantages, as Apple's competitors have discovered.

"The taller, longer form factor that LG and Samsung have adopted creates an edge-to-edge display on the left and right side of the device," noted Ross Rubin, principal analyst at Reticle Research.

"There's a strong case for it from a design perspective," he told TechNewsWorld. "It allows you to get a larger diagonal screen while making the phone easier to hold."

With Samsung going to a bezel-less display in its latest model, it's likely Apple has something along those lines in the works, suggested Tim Bajarin, president of Creative Strategies.

"It would make sense for Apple to streamline the design and give users more working space with a bezel-less screen," he told TechNewsWorld.

OLED Display

The premium iPhone will have an OLED display, Bloomberg also reported. OLED displays are brighter and more flexible than conventional LED screens, and they consume less power.

"Apple would like to use OLED across the lineup, but has had trouble sourcing enough OLED screens to do so," noted Tirias' Krewell, "but OLED is the right choice for the premium model."

Use of stainless steel, although challenging, also would be a good choice for a premium model.

"Stainless steel is a material that Apple has continued to develop expertise in with its watch, so stainless may be justified in a premium edition of the phone," Reticle's Rubin said.

Stainless steel is more rigid than aluminum, which is used on the current iPhones, and harder to mill -- so it could be challenging to Apple's suppliers, Krewell noted.

"Steel also weighs more than aluminum, so it must be used more sparingly to keep the phone light," he pointed out.

Significant Camera Changes

Significant camera changes are in the works for the premium iPhone, Bloomberg also reported.

For example, Apple is experimenting with placing the dual cameras in the phone horizontally instead of vertically, as they are in the iPhone 7 Plus. The design change could result in better photos.

Apple also may add dual cameras to the front of the phone as well as to the back.

One thing Apple hasn't been able to get rid of yet, though, is the bump created by the rear-facing camera.

It's likely that Apple is going to push the camera envelope with the new iPhones, said Andreas Scherer, managing partner at Salto Partners.

"The market expects improved dual lenses and potentially augmented reality-based features as well as depth of field enhancement," he told TechNewsWorld.

"There is a plethora of photo-editing software in the App Store that allows the editing of pictures on a near professional level," Scherer added. "As a result, Apple will continue to take market share from camera manufacturers of point-and-click cameras and entry-level DLSRs."

Virtual Home Button

Another persistent rumor -- repeated in Bloomberg's report -- is the replacement of the home button at the bottom of the iPhone with a virtual button on the screen.

"It's a design necessity if you're going to create a phone with a high display to surface ratio," maintained Reticle's Rubin.

A soft home button requires careful application of fingerprint sensors in or under the OLED screen, said Tirias' Krewell. "That's very cutting-edge technology and a hard manufacturing challenge, but Apple likes to push the envelope for a cleaner look."

Can Apple hit a home run with its new 10th anniversary premium iPhone?

"It's becoming much more difficult to differentiate in the smartphone market," observed David McQueen, a research director at ABI Research.

"While Samsung has beaten Apple to the punch with many new features, it seems Apple will be adding most of them to its new lineup, too -- as well as e-SIM, support for Apple Pencil, and an enhanced version of Siri," he told TechNewsWorld.

"Apple's inability to hit a home run with past iPhone products has allowed Samsung and others to catch-up," noted Pund-IT's King.

"It's not clear to me whether the company's focus on nominal iPhone upgrades and improvements is a system issue, or suggests that we're reaching the limits of smartphone capabilities," he continued. "The upcoming iPhones should help answer that question."



Microsoft's Timely Response to Shadow Brokers Threat Raises Questions

Just as the Shadow Brokers hacker group started crowing about a dump of never-seen-before flaws in Windows, Microsoft announced it already had fixed most of the exploits.

Microsoft

"Today, Microsoft triaged a large release of exploits made publicly available by Shadow Brokers," Microsoft Principal Security Group Manager Phillip Misner wrote in a Friday post.

"Our engineers have investigated the disclosed exploits, and most of the exploits are already patched," he added.

Three of the dozen zero-day vulnerabilities aired by the hackers, which they claimed were part of a large cache of data leaked from the U.S. National Security Agency, did not work at all on Windows 7 and above.

"Customers still running prior versions of these products are encouraged to upgrade to a supported offering," Misner recommended.

As of the most recent patch cycle, no supported versions of Windows were vulnerable to the Shadow Brokers exploits, said Bobby Kuzma, a system engineer at Core Security.

"In other words," he told TechNewsWorld, "for the love of God get XP, Vista and 2003 Server off of your networks."

Irresponsible Action

Microsoft's decision not to patch vulnerabilities affecting older versions of Windows that no longer are supported is understandable, but it doesn't make the situation less worrisome, said James Scott, a senior fellow at the Institute for Critical Infrastructure Technology.

Many systems used in homes, businesses and critical infrastructure run on versions of Microsoft's operating system prior to Windows 7.

"Microsoft's decision to knowingly put these systems at further risk, for any amount of time -- even the hours or days necessary for resource allocation and modernization -- is irresponsible," Scott told TechNewsWorld.

"Every business, individual and critical infrastructure operating an OS that precedes Windows 7 remains at risk of compromise and exploitation," he said.

What's more, "the disclosure has drastically increased this risk by making knowledge of the vulnerability and attack vector publicly available to unsophisticated script kiddies, cybercriminals, cybermercenaries, hail mary threat actors, cyberterrorists and nation-state APTs," Scott added.

Follow-Through Needed

Microsoft's release of patches and disclosure of vulnerabilities is a good thing, but enterprises need to take the process to the next step, cautioned Leo Taddeo, chief security officer at Cryptzone and a former FBI special agent.

"According to the 2016 Verizon Data Breach Investigations Report, most successful attacks exploit known vulnerabilities that have never been patched, despite patches being available for months or even years," he told TechNewsWorld.

"So, while it's important that Microsoft publicly disclosed the vulnerabilities and issued a patch," Taddeo continued, "the challenge for enterprises is to update their infrastructure with the latest supported version of the affected products."

The same is true for consumers.

"Microsoft did the right thing by patching Windows as quickly as possible and getting the patches to people," said Jack E. Gold, principal analyst at J.Gold Associates.

"Whether they deploy them or not is a different issue," he told TechNewsWorld. "Two-thirds to three-quarters of consumers don't even have up-to-date antivrus programs. If they're not concerned about that, how concerned are they going to be about these patches?"

Questions About Sourcing

Although Microsoft is usually very responsible about crediting the sources that make it aware of vulnerabilities in its products, that wasn't the case with the Shadow Brokers flaws.

That raises a number of possible scenarios, suggested Core Security's Kuzma. Perhaps Microsoft found the vulnerabilities itself -- or it may have purchased them from Shadow Brokers when the outfit put them up for sale on the Dark Web earlier this year. The Shadow Brokers may have pre-leaked the flaws to Microsoft, or perhaps the NSA passed them on to the company.

The timing of Microsoft's action raises some questions, Scott White, director of the cybersecurity program at The George Washington University, told TechNewsWorld.

"Microsoft had a ton of vulnerabilities in Windows, and it just found them a month before we were about to get a zero day attack?" he asked. "Were these patches discovered by Microsoft or was someone assisting Microsoft and letting them know of these vulnerabilities?"

Potential Threat to Everyone

The Shadow Brokers flaws likely will impact businesses more than consumers.

"The danger for consumers is limited as long as they're keeping their security updated," said Mike Cotton, vice president of research and development at Digital Defense.

"Microsoft has gotten good at ensuring that if you're behind a firewall or logging on to public WiFi, the network services that these exploits target are not exposed under most configurations," he told TechNewsWorld.

"Most of the risks are on business networks because the way they're configured those network services are exposed to these exploits, Cotton added.

As for the Shadow Brokers, their bark may be worse than their bite.

"They're an irritant more than an absolute threat to our national security compared to the Russians or Chinese -- but it doesn't make them any less criminal. They may be small fish in the pond, but they're still fish," GW's White said.

"The threat from this group derives from the fact they have some kind of source that is able to get them weaponized tools from the NSA," Cotton added.

"The NSA is a tier-one cyberpower -- maybe the preeminent cyberpower in the world," he explained, "so if there's an inside source leaking tools to Shadow Brokers, the distribution of those tools poses a large threat to everyone."



Apple May Transform Diabetes Care and Treatment: Report

Apple is working on a secret project to develop wearable devices that can monitor the blood sugar of diabetics without using invasive finger sticks, part of a vision that originated with company founder Steve Jobs, CNBC reported earlier this week.

Apple has assembled a team of biomedical engineers from various companies to work on the project, according to the report.

Cor, a company Apple acquired in 2010, has been working for more than five years on a way to integrate noninvasive glucose monitoring into a wearable such as the Apple Watch.

Glucose monitoring traditionally has required that diabetics use lancets to pierce their fingertips at least four times daily, measuring blood glucose levels before and after meals, when waking up, and before going to bed.

Many Type 1 diabetics wear pumps to deliver insulin, and they sometimes test up to 16 times a day.

One of the benefits of continuous glucose monitoring, or CGM, is that it can warn diabetics when blood glucose is rising or falling rapidly. Hypoglycemia can result when glucose levels fall below 70 milligrams per deciliter. Hyperglycemia, abnormally high blood glucose, can lead to ketoacidosis or, worse, a diabetic coma.
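As a rough illustration of the kind of alerting logic CGM enables: the 70 mg/dL hypoglycemia threshold comes from the figures above, while the high-glucose cutoff and rate-of-change limit below are made-up example values, not any manufacturer's actual settings.

```python
# Illustrative CGM-style alert check -- a sketch, not a medical device.
HYPO_MG_DL = 70          # below this: hypoglycemia risk (per the article)
HYPER_MG_DL = 180        # illustrative high-glucose threshold (assumed)
MAX_RATE_MG_DL_MIN = 3   # illustrative "changing rapidly" limit (assumed)

def cgm_alert(glucose: float, rate: float) -> str:
    """Return an alert string for a glucose reading (mg/dL) and its
    rate of change (mg/dL per minute)."""
    if glucose < HYPO_MG_DL:
        return "ALERT: hypoglycemia"
    if glucose > HYPER_MG_DL:
        return "ALERT: hyperglycemia"
    if abs(rate) > MAX_RATE_MG_DL_MIN:
        return "WARN: glucose changing rapidly"
    return "ok"

print(cgm_alert(65, 0))    # ALERT: hypoglycemia
print(cgm_alert(110, -4))  # WARN: glucose changing rapidly
```

A real CGM applies far more sophisticated filtering and prediction, but the value to patients is essentially this: continuous readings plus early warnings, without a fingerstick.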

Apple reportedly has begun feasibility studies in the Bay Area and has retained consultants to help with regulatory issues.

Lucrative Market

Managing diabetes has been a rapidly evolving focus in both the smartphone and wearable device markets.

Integrating glucose sensors into wearable devices is a difficult challenge, but the market potential is very great, noted Jitesh Ubrani, senior research analyst for WW mobile device trackers at IDC.

"Today most wearables are consumer products, but there's an untapped opportunity in the medical community," he told TechNewsWorld.

The overall diabetic testing market will reach US$17 billion in 2021, up from $12 billion in 2016, ABI has estimated. Revenue from CGM devices is expected to increase at a compound annual growth rate of 41 percent. More than 9 million wearable CGM devices are expected to ship by 2021, and wearable device makers like Apple and its competitors want a bite of that market.
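To put those figures in perspective, the ABI numbers imply the overall testing market is growing in the low single digits annually, while CGM revenue at 41 percent a year would more than quintuple over the same five-year span (a quick back-of-the-envelope check of the estimates quoted above):

```python
# Sanity-check the growth figures quoted above: overall diabetic-testing
# market $12B (2016) -> $17B (2021), and a 41% CAGR for CGM devices.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

overall = cagr(12e9, 17e9, 5)
print(f"{overall:.1%}")  # ~7.2% a year for the market as a whole

# At 41% a year, CGM revenue would grow more than fivefold over five years.
cgm_multiple = (1 + 0.41) ** 5
print(f"{cgm_multiple:.1f}x")  # ~5.6x
```

That gap explains why device makers are converging on CGM rather than on traditional test strips.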

Existing Products

Apple has an existing partnership with Dexcom, which offers the G5 continuous glucose monitor. It allows Apple Watch users to monitor their blood sugar using an app that syncs with the iPhone.

Many CGM devices have accuracy issues and require frequent calibration with blood glucose sensors. However, regulators last year found the G5 readings were accurate enough to require calibration just once every 12 hours.

Abbott's Libre blood testing system has a sensor, worn on the upper arm for up to 14 days at a time, that measures glucose in the interstitial fluid under the skin every minute. The system originated in 2014 in Europe and now is available in 30 countries worldwide; it is under review by the U.S. Food and Drug Administration.

Medtronic, one of the leading makers of diabetic insulin pumps and continuous glucose sensors in the U.S., last year received approval in Europe, Latin America and Australia for its Guardian Connect device, according to company spokesperson Danielle Swanson. The device currently is under review by the U.S. Food and Drug Administration.

"CGM is extremely valuable to people with diabetes, as it allows them to see current glucose levels at any time on their phone without a fingerstick," Swanson told TechNewsWorld. "People with diabetes can also receive alerts to help them avoid high and low blood glucose levels."

The app, which is being rolled out on a country-by-country basis, initially was made for iOS. Medtronic will release an Android version at a later date.

Under a partnership announced last year, Medtronic will offer glucose tracking on Fitbit's iPro2 mLog mobile app, using data from Medtronic's iPro2 professional CGM system, to give patients a record of their exercise and blood glucose levels. The app is available for both iOS and Android mobile devices.

UK-based Nemaura Medical has developed the sugarBEAT system, a glucose monitoring device that uses a noninvasive, disposable patch. The 10-mm, coin-sized device uses an electric current to measure glucose in molecular amounts of interstitial fluid.

The same technology is being developed for a variety of other uses, such as taking measurements related to athletic performance and measuring oxygen depletion in patients. The patch would last up to 14 days before needing to be changed.

The company signed a letter of intent with China-based Shenzhen CAS Health Corp. to form a joint venture to manufacture and distribute the sensors in China, pending regulatory approval.

"We have designed sugarBEAT around the needs of the user," Nemaura Medical Director Bashir Timol told TechNewsWorld. "Accordingly, the primary interface is the app accessible on users' existing smart device."



Friday, April 14, 2017

Shuttleworth Gives Up Hope for Convergence Breakthrough

Canonical's long and winding quest for a unified user experience came to a sudden halt on Wednesday, as founder Mark Shuttleworth announced the firm's decision to stop investing in its struggling Unity8 shell and revert to Gnome for its Ubuntu 18.04 LTS desktop OS release.


The 6-year-old Unity plan was to create a user interface that could work on various types of devices, ranging from a mobile phone to a personal computer or tablet.

The project had been the subject of rampant speculation over the past couple of years, as public updates were scarce.

There was speculation that the company's own resources were insufficient to carry the project where it needed to go -- yet there seemed to be a reluctance to collaborate.

Canonical remains committed to the Ubuntu desktop that millions already rely on, Shuttleworth said, and it will continue to produce what it touts as the most usable open source desktop in the world, maintaining existing LTS releases and working with commercial partners to deliver the desktop.

He extended that commitment to corporate users who rely on the desktop and millions of IoT and cloud developers who use it to innovate.

Shuttleworth believed that convergence was the future and that a converged product would appeal to the free software community, but "I was wrong on both counts," he said. "In the community, our efforts were seen [as] fragmentation not innovation. And industry has not rallied to the possibility, instead taking a 'better the devil you know' approach to those form factors, or investing in home-grown platforms."

Focus on Future

The plan now is to invest in areas that will contribute to the company's growth, said Shuttleworth, including Ubuntu for desktops, servers and virtual machines; cloud infrastructure products, including OpenStack and Kubernetes; cloud operations, including MAAS, LXD, BootStack and Juju; and IoT in snaps and Ubuntu Core.

"All of those have communities, customers, revenue and growth, the ingredients for a great and independent company, with scale and momentum," he said.

Hard Lessons

"I think the decision is an admission that user experience is very much tied to hardware and specific usage models," observed Paul Teich, principal analyst at Tirias Research.

With "no direct connection to hardware," the open source user interface "will always lag in delivering new UX features and in taking advantage of new hardware capabilities," he told LinuxInsider.

Google would have fallen behind in mobile without Pixel and previous Nexus phones and tablets, Teich said, noting that Apple "uses vertical integration of both their own OS and hardware to pioneer new user experiences."

The same arguments can be used to justify Google's Chromebooks, which are driving the non-PC user experience for clamshell form factors, he suggested.

"This is probably the biggest impediment to Linux desktop UX development," Teich added. "Google is working closely with hardware manufacturers to provide a better, integrated experience between Chrome and the underlying hardware. A software only effort can't even get close and this is the core of why Mark wisely let go of Unity8."

The move reflects a recognition that the mobile space is crowded, said Al Gillen, group vice president for software development and open source at IDC.

Shuttleworth's aim, Gillen told LinuxInsider, is to reallocate resources into a space where Canonical and Ubuntu "can gain more traction and make a difference."

