Technology

Sunday, April 30, 2017

Duty Calls Popular Shooter Back to World War II

2:33 AM

Call of Duty will return to its World War II roots when the latest title arrives this November, publisher Activision Blizzard announced on Wednesday. The franchise has been a steady hit maker since its 2003 debut. Various titles in the series have become regular staples in e-sports tournaments and Major League Gaming.

Call of Duty is one of the most popular series in video game history, selling a combined 175 million copies -- second only to Take-Two's Grand Theft Auto franchise. The series has passed US$11 billion in total lifetime revenue.

Although the series has been a robust seller for Activision Blizzard, the critical response to recent titles hasn't been as strong. With Call of Duty: WWII, the series once again sets its sights on the military exploits of the Greatest Generation, including the D-Day landings.

The upcoming game will feature both a single player campaign, which will allow players to take the role of various characters at different points in the war, and an online multiplayer mode. The campaign will take place largely in the latter stages of the war, as the Allies liberate Nazi-occupied Europe.

Based on what has been announced so far, it seems that players will assume the role of Private Ronald "Red" Daniels of the American 1st Infantry Division. They can opt to fill the shoes of a British soldier as well, and even play as a member of the French resistance.

Sledgehammer Games, an independent but wholly owned subsidiary of Activision, is the developer of Call of Duty: WWII. The studio codeveloped Call of Duty: Modern Warfare 3 with Infinity Ward, and it also developed Call of Duty: Advanced Warfare.

The game has not yet been rated.

Every Bullet Counts

Sledgehammer Games has promised that Call of Duty: WWII will be different from other shooters, as it attempts to reimagine the traditional first-person shooter model. Notably, the game will have a darker tone that doesn't shy away from the graphic horrors of war, and it apparently will ditch one of the genre's longstanding features: health regeneration.

This means every bullet could have the character's name on it, and while technically it won't mean "game over," it should change the gaming experience.

"This is going to be a more hardcore game, where the skill curve is greater and the challenges harder," said Roger Entner, principal analyst at Recon Analytics.

"The graphics will also be more realistic, and that could make the violence seem even more intense," he told TechNewsWorld.

"However, we have to realize that this is a title aimed at mature gamers, and today the average age for those gamers is in their 30s," Entner added. "This isn't a game for kids, but gamers are more than kids today."

Return to Duty

The original Call of Duty, which came on the heels of Electronic Arts' Medal of Honor and Battlefield 1942 titles, scored a direct hit when it launched in 2003. Gamers couldn't seem to get enough World War II action.

"The original Call of Duty was certainly helped by those other titles -- but really by the number of World War II movies that brought the conflict to the mainstream again," said Entner. "It was the beginning of the war on terror, and the game combined elements of heroism and patriotism, and people wanted to play out the exploits of these heroes."

As the very real war on terror drags on without an end in sight, the modern conflict setting may have run its course, but the question is whether today's gamers will be interested in returning to World War II again.

Call of Duty won't be the first franchise to jump back to an earlier era. EA's Battlefield 1, released last fall, showed that gamers still have an appetite for period settings. It was one of the top sellers of the year, suggesting that fans may be ready for something different -- but there are still risks.

"Games, like movies, have a huge upfront cost, and this is why we see so many sequels," said Entner. "At the same time, the sequels need to be more than just more of the same."

History vs. Gaming

It's possible some people with an interest in history will be drawn back to games, thanks to this World War II setting.

"I did witness a spurt of WWII awareness during the original Call of Duty's popularity that I would attribute directly to the game," said John Adams-Graf, editor of Military Trader, a publication devoted to military memorabilia and historical re-enactors.

"While its release was about the same time as the epic WWII film, Saving Private Ryan, followed by the HBO series, Band of Brothers, the level of interest in WWII small arms that I witnessed at that time probably had more to do with the game than with Hollywood," Adams-Graf told TechNewsWorld.

The game could heighten awareness of the historical event as well.

"The depth of detail and in-game resources will give any gamer immediate access to more information about WWII, the men and women who served, and the weapons that were used," he added.

Total War

Even though this game promises greater realism, thanks to improved graphics and gameplay, Activision and developer Sledgehammer Games may be ensuring that it has something for everybody.

Few details were provided, but based on the trailer and the official website, Call of Duty: WWII could include the popular "zombie" cooperative mode. The mode has been a staple of the series since the 2008 release of Call of Duty: World at War, and it has appeared in the subsequent Call of Duty: Black Ops titles.

"This helps give the game a low barrier of entry, and promises to have something for different tastes," noted Recon Analytics' Entner.

"This is for those who are in a beer-and-pretzels mood and want silly fun," he said. "It is a way to get away from the realism that the other half of the game offers."


OPINION Our Sci-Fi Future: Silly vs. Terrifying

2:33 AM

The future is now, or at least it is coming soon. Today's technological developments are looking very much like what once was the domain of science fiction. Maybe we don't have domed cities and flying cars, but we do have buildings that reach to the heavens, and drones that soon could deliver our packages. Who needs a flying car when the self-driving car -- though still on the ground -- is just down the road?

The media often compares technological advances to science fiction, and the go-to examples are usually Star Trek, The Jetsons, and various 1980s and '90s cyberpunk novels and similar dark fiction. In many cases, this is because many tech advances map fairly easily onto what those works of fiction presented.

On the other hand, they tend to be really lazy comparisons. Every advance in holographic technology should not immediately evoke Star Trek's holodeck, and every servant-styled robot should not immediately be compared to Rosie, the maid-robot in The Jetsons.

Cool the Jetsons

The Jetsons is one of the cultural references cited most often when new technology emerges. Perhaps this is because so many of today's tech reporters viewed the shows in syndication or reruns -- or perhaps it's just because the comparisons are so easy to make.

Comedian John Oliver mocked TV news programs for their frequent references to The Jetsons when reporting developments in flying cars, video phones and other consumer technologies. Yet, the Smithsonian magazine several years ago published a piece titled "50 Years of the Jetsons: Why The Show Still Matters," casting it as part of the golden age of futurism.

Likewise, Diply.com noted "11 Times The Jetsons Totally Predicted The Future," while AdvertisingAge asked, "How Much of the Jetsons' Futuristic World Has Become a Reality?"

First, the show shouldn't matter. It was a cartoon and predicted the future about as accurately as The Flintstones depicted life in the Stone Age. None of The Jetsons' future tech predictions were revolutionary.

Flying cars, flat panel TVs, video phones and robot maids often are called out -- yet all of those innovations were envisioned decades before the TV show hit the airwaves. Science fiction writer H.G. Wells suggested we might have small personal aircraft in his work The Shape of Things to Come -- and we're not much closer to having them now than when he came up with the idea in 1933.

Flat panel TVs, on the other hand, were in development in the early 1960s around the same time the show was on the air, and were envisioned much earlier.

As for video phones, the concept of videotelephony initially became popular in the late 1870s, and Nazi Germany developed a working system in the 1930s. Of course, it is likely more uplifting to compare today's modern tech to The Jetsons than to give the Nazis credit.

As for the robot maid, let's not forget that "robots" were introduced in the 1920 play Rossum's Universal Robots, by Czech writer Karel Čapek, which told of a class of specially created servants. They may not technically have been machines, but they were there to do the jobs people didn't want to do.

Boldly Going the Same Place

Star Trek is another often-referenced show whenever the next cool technology looks anything like its holodeck or replicator technology.

Perhaps this is because it is easy to compare today's 3D printers to replicators, even if the principles underlying them are entirely different. It is equally easy -- lazy, actually -- to suggest that holographic technology eventually could -- one day in the far future -- create a holodeck like the one on Star Trek.

The problem is that beneath the surface, much of the technology seen on the show is different from our current technology and really borders on the impossible. The replicator breaks things down not to the molecular level but to the atomic level.

That is as unlikely ever to become reality as the show's transporter technology -- but that didn't stop the comparisons to the show's magic means of "beaming" when German scientists created a system of scanning an object and recreating it elsewhere.

That isn't quite the same. The German invention basically combines 3D printing with a fax machine. That is hardly deserving of a "Beam me up, Scotty" reference -- even though hardcore fans will happily tell you that exact phrase never has been uttered in any of the TV shows or movies.

Snow Crash and Burn

Many of today's Internet-based technologies -- including 3D and virtual reality, as well as various online worlds -- are compared to similar tech in the works of William Gibson, Bruce Sterling and Neal Stephenson.

In the past 20 years, there have been many suggestions that various worlds that have emerged online -- from Worlds Inc. in 1995 to Second Life in 2005 -- were close to a "real life Snow Crash," the book that introduced readers to the "Metaverse."

The Metaverse is an online world populated with 3D avatars who can mingle, chat and engage with one another while their real-life users are in distant places around the globe.

Yes, that does sound much like today's online video games and the virtual world Second Life. The amazing part of these comparisons isn't that Stephenson, as a science fiction writer and visionary, was so right about where we were headed -- but rather that anyone would gleefully laud the fact that we're closing in on that reality!

The worlds presented in Snow Crash and the works of Gibson and Sterling aren't exactly utopias. These guys didn't write stories set in perfect worlds free of crime, disease and war. Rather, theirs are worlds where criminals and giant corporations work hand in hand and governments have fallen. People are desperate and downtrodden, and they escape to virtual realities because their real lives are awful.

Case, the main character in Gibson's seminal book Neuromancer, is a "console cowboy" -- basically a hacker by another, more creative name. The character is down on his luck, and by taking on a job no one wants, he is able to get his life back on track and get out of the game. Case's story makes for fine reading -- but he's an antihero at best.

He's a criminal who works for bigger criminals. Do we really want to live in such a world -- one where a few powerful criminals and corporations rule everything?

When executives at Facebook's Oculus VR cite Snow Crash and the Metaverse, one must wonder what part of that vision of the future they're excited about. It would be akin to developing a robot with true artificial intelligence and proclaiming, "It's just like the Terminator!"

This might change if a film version of Snow Crash ever materializes -- but, for the record, I'm happy that attempts to bring Neuromancer to the big screen have failed. We really don't need another Gibson story to suffer the fate of Johnny Mnemonic.

Brave New World

Cyberpunk stories typically are set in worlds in which corporations control innovation, launch satellites to connect the world to spread information, and maintain private armies. That vision is a lot darker than The Jetsons and Star Trek, of course, but it also is much closer to our reality.

Are there now giant corporations that literally control how we access the Internet? Or others that have what appear to be bizarre projects to further spread their reach to remote parts of the world? Doesn't that pretty much describe tech giants Comcast and Facebook? Or Google?

Now imagine a tech visionary who is a billionaire with a private space program, who also is developing new ways to harness solar power. There have been James Bond books and movies with such villains (see Moonraker and The Man With the Golden Gun) -- but that description also fits Elon Musk. By no means am I suggesting that Musk is a Bond villain, but I'm not the only one to notice those similarities.

Then there is the fact that Musk actually bought the Lotus Esprit submarine car that was used in the film The Spy Who Loved Me for nearly US$1 million. Now tell me that isn't something a Bond villain would do.

The big takeaway is that perhaps it is too easy to see today's world in past entertainment vehicles and ignore the fact that much of what The Jetsons predicted was completely wrong. We don't expect to see a flying car that can fold up into a briefcase, ever.

Likewise, teleportation and warp speed likely will remain just part of the mythos of Star Trek.

When it comes to the darker side of science fiction, we should be cautious about where we are headed. We should heed it as a portent of what to avoid -- not a future we should embrace.

As for Musk, let's just hope a super spy is on standby.


HOW TO Linux's Big Bang: One Kernel, Countless Distros

2:33 AM

Even if you're a newcomer to Linux, you've probably figured out that it is not a single, monolithic operating system, but a constellation of projects. The different "stars" in this constellation take the form of "distributions," or "distros." Each offers its own take on the Linux model.

To gain an appreciation of the plethora of options offered by the range of distributions, it helps to understand how Linux started out and subsequently proliferated. With that in mind, here's a brief introduction to Linux's history.

Linus Torvalds, Kernel Builder

Most people with any familiarity with Linux have heard of its creator, Linus Torvalds, but not many know why he created it in the first place. In 1991, Torvalds was a university student in Finland studying computers. As an independent personal project, he wanted to create a Unix-like kernel so he could build an operating system for his own hardware.

The "kernel" is the part of an operating system that mediates between the hardware and the software running on it. Essentially, it is the heart of the system. Developing a kernel is no small feat, but Torvalds was eager for the challenge and found he had a rare knack for it.

As he was new to kernels, he wanted input from others to ensure he was on the right track, so he solicited the experience of veteran tinkerers on Usenet, the foremost among early Internet forums, by publishing the code for his kernel. Contributions flooded in.

After establishing a process for reviewing forum-submitted patches and selectively integrating them, Torvalds realized he had amassed an informal development team. It quickly became a somewhat formal development team once the project took off.

Richard Stallman's Role

Though Torvalds and his team created the Linux kernel, there would have been no subsequent spread of myriad Linux distributions without the work of Richard Stallman, who had launched the free software movement a decade earlier.

Frustrated with the lack of transparency in many core Unix programming and system utilities, Stallman had decided to write his own -- and to share the source code freely with anyone who wanted it and also was committed to openness. He created a considerable body of core programs under the banner of the GNU Project, which he launched in 1983.

Without them, a kernel would not have been of much use. Early designers of Linux-based OSes readily incorporated the GNU tools into their projects.

Different teams began to emerge -- each with its own philosophy regarding computing functions and architecture. They combined the Linux kernel, GNU utilities, and their own original software, and "distributed" variants of the Linux operating system.

Server Distros

Each distro has its own design logic and purpose, but to appreciate their nuances it pays to understand the difference between upstream and downstream developers. An "upstream developer" is responsible for actually creating the program and releasing it for individual download, or for including it in other projects. By contrast, a "downstream developer," or "package maintainer," is one who takes each release of the upstream program and tweaks it to fit the use case of a downstream project.

While most Linux distributions include some original projects, the majority of distribution development is "downstream" work on the Linux kernel, GNU tools, and the vast ecosystem of user programs.

Many distros make their mark by optimizing for specific use cases. For instance, some projects are designed to run as servers. Distributions tailored for deployment as servers often will shy away from quickly pushing out the latest features from upstream projects in favor of releasing a thoroughly tested, stable base of essential software that system administrators can depend on to run smoothly.

Development teams for server-focused distros often are large and are staffed with veteran programmers who can provide years of support for each release.

Desktop Distros

There is also a wide array of distributions meant to run as user desktops. In fact, some of the more well-known of these are designed to compete with major commercial OSes by offering a simple installation and intuitive interface. These distributions usually include enormous software repositories containing every user program imaginable, so that users can make their systems their own.

As usability is key, they are likely to devote a large segment of their staff to creating a signature, distro-specific desktop, or to tweaking existing desktops to fit their design philosophy. User-focused distributions tend to speed up the downstream development timetable a bit to offer their users new features in a timely fashion.

"Rolling release" projects -- a subset of desktop distributions -- are crafted to be on the bleeding edge. Instead of waiting until all the desired upstream programs reach a certain point of development and then integrating them into a single release, package maintainers for rolling release projects release a new version of each upstream program separately, once they finish tweaking it.

One advantage to this approach is security, as critical patches become available faster than in non-rolling release distros. Another upside is the immediate availability of new features that users otherwise would have to wait for. The drawback for rolling release distributions is that they require more manual intervention and careful maintenance, as certain upgrades can conflict with others, breaking a system.

Embedded Systems

Yet another class of Linux distros is known as "embedded systems," which are extremely trimmed down (compared to server and desktop distros) to fit particular use cases.

We often forget that anything that connects to the Internet or is more complex than a simple calculator is a computer, and computers need operating systems. Because Linux is free and highly modular, it's usually the one that hardware manufacturers choose.

In the vast majority of cases, if you see a smart TV, an Internet-connected camera, or even a car, you're looking at a Linux device. Practically every smartphone that's not an iPhone runs a specialized variety of embedded Linux too.

Linux Live

Finally, there are certain Linux distros that aren't meant to be installed permanently in a computer, but instead reside on a USB stick and allow other computers to boot them up without touching the computer's onboard hard drive.

These "live" systems can be optimized to perform a number of tasks, ranging from repairing damaged systems, to conducting security evaluations, to browsing the Internet with high security.

As these live Linux distros usually are meant for tackling very specific problems, they generally include specialized tools like hard drive analysis and recovery programs, network monitoring applications, and encryption tools. They also keep a light footprint so they can be booted up quickly.

How Do You Choose?

This is by no means an exhaustive list of Linux distribution types, but it should give you an idea of the scope and variety of the Linux ecosystem.

Within each category, there are many choices, so how do you choose the one that might best suit your needs?

One way is to experiment. It is very common in the Linux community to go back and forth between distros to try them out, or for users to run different ones on different machines, according to their needs.

In a future post, I'll showcase a few examples of each type of distribution so you can try them for yourself and begin your journey to discovering the one you like best.


Mobile Ubuntu Gamble to Fizzle Out in June

2:33 AM

Canonical this week said that it will end its support for Ubuntu Touch phones and Ubuntu-powered tablets in June, and that it will shut down its app store at the end of this year. The company previously had signaled the system's demise, but it had not fixed a date.

With Ubuntu Touch, a unified mobile OS based on Ubuntu Linux, Canonical hoped to establish a marketable alternative to the Android and iOS platforms.

Unity Out

The news came on the heels of Canonical founder Mark Shuttleworth's recent disclosure that the company had decided to drop Ubuntu's Unity desktop environment as of the next major distro release. That revelation hinted that more shedding of unsuccessful technologies would follow.

Canonical will replace its shelved flagship desktop environment with GNOME 3. The Unity decision was based, at least in part, on continuing difficulties in moving it to Mir, Canonical's replacement for the X window server. That combination was integral to the Ubuntu phone and tablet development.

"Ubuntu phones and tablets will continue to operate. OTA (over-the-air) updates are currently limited to critical fixes and security patches only," said Canonical spokesperson Sarah Dickson.

The critical patching will continue until June 2017, she told LinuxInsider. Then no further updates will be delivered.

Risky Business

Canonical declined to provide details about what led to the end of the Ubuntu phone and tablet technology. However, one obvious reason is the low number of users.

"This is one of those 'What if you gave a party and nobody came?' scenarios," said Charles King, principal analyst at Pund-IT.

It is no doubt disappointing for Canonical, but it also calls into question the viability of any mobile OS that does not have a sizable following or a deep-pocketed vendor behind it, he told LinuxInsider.

"For historical analogies, the early days of the PC business before the market coalesced around Windows and the Mac OS offer some good examples," King observed.

Exit Strategy

For the time being, the app store will continue to function, said Dickson. That means users will be able to continue to download apps, and developers can continue to push updates and bug fixes to existing apps.

"However, it will no longer be possible to purchase apps from the Ubuntu Phone app store from June 2017," she said.

After that date, developers of paid apps already in the store will have two choices: make their apps available for free, or remove them from the store.


Red Hat Gives JBoss AMQ a Makeover

2:33 AM

Red Hat on Thursday announced JBoss AMQ 7, a messaging platform upgrade that enhances its overall performance and improves client availability for developers.

JBoss AMQ is a lightweight, standards-based open source platform designed to enable real-time communication between applications, services, devices and the Internet of Things. It is based on the upstream Apache ActiveMQ and Apache Qpid community projects.

JBoss AMQ serves as the messaging foundation for Red Hat JBoss Fuse. It provides real-time, distributed messaging capabilities needed to support an agile integration approach for modern application development, according to Red Hat.

The upgrade will be available for download by members of the Red Hat developers community this summer, the company said.

Technology plays a central role in enabling greater levels of interconnection and scalability, noted Mike Piech, general manager of Red Hat JBoss Middleware.

"With its new reactive messaging architecture, JBoss AMQ is well-suited to support the applications and infrastructure that can help deliver those experiences," he said.

What It Does

"Messaging," in this case, relates to commands passing between servers and client devices. Broadly speaking, this kind of technology is called "message-oriented middleware," or "MOM," said Charles King, principal analyst at Pund-IT.

"This is purely aimed at easing the lives of enterprise developers whose organizations are Red Hat customers," he told LinuxInsider. "This is especially true for those focused on IoT implementations, since AMQ 7 can open up data in embedded devices used in IoT sensors to inspection, analysis and control."

Red Hat's AMQ 7 broker component is based on the Apache ActiveMQ Artemis project. The update adds support for additional APIs and protocols via new client libraries, including Java Message Service 2.0, JavaScript, C++, .Net and Python.

Upgrade Details

JBoss AMQ 7 brings technology enhancements to three core components: broker, client and interconnect router.

The JBoss AMQ broker manages connections, queues, topics and subscriptions. It uses innovations from Artemis to provide an asynchronous internal architecture. This increases performance and scalability and enables the system to handle more concurrent connections and achieve greater message throughput.

JBoss AMQ 7 Client supports popular messaging APIs and protocols by adding new client libraries. These include Java Message Service 2.0, JavaScript, C++, .Net and Python, along with existing support for the popular open protocols MQTT and AMQP.
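To make the JMS 2.0 support a little more concrete, here is a minimal sketch of sending and receiving a text message through the simplified JMSContext API. It assumes the upstream Artemis JMS client that AMQ 7 builds on (the ActiveMQJMSConnectionFactory class); the broker URL and queue name are placeholders, not values from Red Hat's documentation.

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    import org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory;

    public class AmqQuickLook {

        public static void main(String[] args) {
            // Hypothetical broker URL; a real deployment would supply its own.
            ConnectionFactory factory =
                    new ActiveMQJMSConnectionFactory("tcp://localhost:61616");

            // JMSContext, new in JMS 2.0, folds Connection and Session into one
            // AutoCloseable object, so try-with-resources handles cleanup.
            try (JMSContext context = factory.createContext()) {
                Queue queue = context.createQueue("orders"); // placeholder queue name

                // Send a simple text message.
                context.createProducer().send(queue, "order #42 received");

                // Receive it back synchronously, waiting up to five seconds.
                String body = context.createConsumer(queue)
                                     .receiveBody(String.class, 5000);
                System.out.println("Got: " + body);
            }
        }
    }

The point of the JMS 2.0 client is that this kind of compact producer/consumer pairing is possible; older JMS 1.1-style Connection and Session code continues to work alongside it.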

JBoss AMQ 7 now offers broad interoperability across the IT landscape, which lets it open up data in embedded devices to inspection, analysis and control.

The new interconnect router in JBoss AMQ 7 enables users to create a network of messaging paths spanning data centers, cloud services and geographic zones. This component serves as the backbone for distributed messaging. It provides redundancy, traffic optimization, and more secure and reliable connectivity.

Key Improvements

Two key enhancements in JBoss AMQ 7 are a new reactive implementation and interconnect features, said David Codelli, product marketing lead for JBoss AMQ at Red Hat.

The first is an asynchronous (reactive) server component, which no longer ties up resources for each idle connection. This should make developers' interactions with messaging more performant, in terms of both throughput and latency.

"It also means that more developers can use the system since some asynchronous threading models can't use traditional brokers," Codelli told LinuxInsider.

The second key enhancement is the new interconnect feature, which provides numerous benefits, chief among them the ability to work through a single local service.

"Developers can connect to one local service to have access to a stream -- sending or receiving -- of data that can travel throughout their organization," Codelli said, "crossing security boundaries, geographic boundaries and public clouds without the developer having to configure anything."

Targeted Users

The messaging platform will better meet the needs of engineering teams who share data in a loosely coupled, asynchronous fashion. Similarly, operations teams will be better able to share feeds of critical data with every department in a company, even if it has a global reach.

"Think giant retailers that need to propagate product defect data as quickly and efficiently as possible," suggested Codelli.

Two other intended beneficiaries of the upgrade are financial services software developers and IT managers dealing with transportation and logistics tasks.

The interconnect feature is ideal for things like payment gateways. It is also helpful to architects who have to contend with large amounts of disparate data, such as information about delays and outages, flowing over a wide area.

Cross-Platform Appeal

One of the appeals of MOM technologies such as Apache ActiveMQ is that messaging components can be distributed across multiple heterogeneous platforms, Pund-IT's King said. That effectively insulates application developers from having to be experts in enterprise operating systems and network interfaces.

"A cross-platform value proposition is central to what Red Hat is doing," King observed. "Overall, this new update to AMQ 7 is likely to be welcomed by numerous customers, especially those focused on distributed applications and IoT."


Saturday, April 22, 2017

Facebook's Latest Moon Shot: I Think, Therefore I Type

2:32 AM

Facebook on Wednesday told its F8 conference audience about two new cutting-edge projects that could change the way humans engage with devices.

Over the next two years, the company will work on a new technology that will allow anyone to type around 100 words per minute -- not with fingers, but using a process that would decode neural activity devoted to speech.

What Facebook envisions is a technology that would resemble a neural network, allowing users to share thoughts the way they share photos today.

This technology also could function as a speech prosthetic for people with communication disorders or as a new way to engage in an augmented reality environment, suggested Regina Dugan, vice president of engineering at Facebook's Building8.

The other project announced at F8 would change the way users experience communication input -- that is, it would allow them to "hear" through the skin. The human body has, on average, about two square meters of skin that is packed with sensors. New technologies could use them to enable individuals to receive information via a "haptic vocabulary," Facebook said.

Putting the Best Interface Forward

About 60 employees currently are working on the Building8 projects -- among them machine learning and neural prosthetic experts. Facebook likely will expand this team, adding experts with experience in brain-computer interfaces and neural imaging.

The plan is to develop noninvasive sensors that can measure brain activity and decode the signals that are associated with language in real time.

The technology would have to advance considerably before individuals would be able to share pure thoughts or feelings, but the current effort can be viewed as the first step toward that goal, said Facebook CEO Mark Zuckerberg.

Type With Your Brain

Facebook is not the only company working on technology that could allow a direct brain-computer connection. Elon Musk last month launched Neuralink, a startup dedicated to developing a "neural lace" technology that could allow the implanting of small electrodes into the brain.

One difference between Facebook's "type with your brain" concept and Musk's neural lace project is that "Facebook wants it to be noninvasive," said futurist Michael Rogers.

"That's a tough one, because the electrical impulses in the brain are very small and the skull is not a great conductor," he told TechNewsWorld.

The nerve signals outside the skull that trigger muscles are much stronger, so reading brain waves noninvasively means filtering a lot of much-stronger noise.

"The existing 'neural cap' technologies that allegedly let wearers, say, learn to control computer games with their brains, are actually probably training them to -- unconsciously -- use their eyebrow and forehead muscles," suggested Rogers. "Interesting, but not the same as typing with your brain."

Hearing Through Touch

Just as Braille has allowed the blind to read, Facebook's technology to hear through the skin could be a game changer for those who are deaf.

"Sound through the skin sounds more like a tool for the profoundly deaf than for everyday use," observed Rogers -- and it might not be the best solution for those with such hearing problems.

"I'm still a big believer in the potential for bone conduction temples on smart glasses as a really good audio solution that doesn't require earbuds," he added.

Delivering information through this technology may be less challenging than processing it.

"It is technically easy to translate the data to a speech pattern, but the hard part is training the human to understand it," said Paul Teich, principal analyst at Tirias Research.

"Facebook can simplify the words to codes, but you are still stuck with the training to understand the code," he told TechNewsWorld.

"There is no way to immediately understand what is being sent, and this isn't very intuitive as I see it," added Teich. "With the right training, however, a coded message could be understood in real time."

Facebook's Timeline

Zuckerberg emphasized that Facebook is taking the first steps toward development of these technologies, and it could be a long time before they have practical applications.

"Having bold visions and making ambitious predictions characterizes some of today's most regarded tech entrepreneurs," said Pascal Kaufmann, CEO of Starmind.

"There seem to be no limits, and one can easily fall prey to the belief that everything is possible," he said.

"However, climbing a high tree is not the first step to the moon -- actually, it is the end of the journey," Kaufmann told TechNewsWorld.

"Despite some gradual improvements in speech recognition these days, understanding of context and meaning is still far beyond our technological capabilities. Taking a shortcut through directly interfacing human brains, and circumventing the highly complex translation from nerve signals into speech and back from speech recognition into nerve signals, I consider one of the more creative contributions in the last few months," he said.

"It is certainly an alternative to the brute force approaches and the unjustified AI hype that seems like climbing a tree rather than building a space rocket. Zuckerberg's announcement describes a space rocket; it is up to us now to develop the technology to aim for the moon," said Kaufmann.

"Both of these technologies are well within reach. This kind of brain imaging has been used in academic settings for years, and scientists developed versions of 'skin hearing' devices over 30 years ago," said Michael Merzenich, designer of BrainHQ, professor emeritus in the UCSF School of Medicine, and winner of the 2016 Kavli Prize.

"The real challenge is making this science practical. How can a brain be trained to learn to control these new devices?" he asked.

"Every new form of communication -- from writing to the telephone to the Web -- changes human culture and human brains," Merzenich told TechNewsWorld. "What impact will using these new devices have on our brains -- and our culture?"


Report: Commercial Software Riddled With Open Source Code Flaws

2:32 AM

Black Duck Software on Wednesday released its 2017 Open Source Security and Risk Analysis, detailing significant cross-industry risks related to open source vulnerabilities and license compliance challenges.

Black Duck audited 1,071 commercial applications for the study last year. The audits show widespread weaknesses in addressing open source security vulnerability risks across key industries.

Open source security vulnerabilities pose the highest risk to e-commerce and financial technologies, according to Black Duck's report.

Open source use is ubiquitous worldwide. An estimated 80 percent to 90 percent of the code in today's software applications is open source, noted Black Duck CEO Lou Shipley.

Open source lowers dev costs, accelerates innovation, and speeds time to market. However, there is a troubling level of ineffectiveness in addressing risks related to open source security vulnerabilities, he said.

"From the security side, 96 percent of the applications are using open source," noted Mike Pittenger, vice president for security strategy at Black Duck Software.

"The other big change we see is more open source is bundled into commercial software," he told LinuxInsider.

The open source audit findings should be alarming to security executives. The application layer is a primary target for hackers. Thus, open source exploits are the biggest application security risk that most companies have, said Shipley.

Understanding the Report

The report's title, "2017 Open Source Security and Risk Analysis," may be a bit misleading. It is not an isolated look at open source software. Rather, it is an integrated assessment of open source code that coexists with proprietary code in software applications.

"The report deals exclusively with commercial products," said Pittenger. "We think it skews the results a little bit, in that it is a lagging indicator of how open source is used. In some cases, the software was developed within three, five or 10 years ago."

The report provides an in-depth look at the state of open source security, compliance, and code-quality risk in commercial software. It examines findings from the anonymized data of more than 1,000 commercial applications audited in 2016.

Black Duck's previous open source vulnerability report was based on audits involving only a few hundred commercial applications, compared to the 1,071 software applications audited for the current study.

"The second round of audits shows an improving situation for how open source is handled. The age of the vulnerabilities last year was over five years on average. This year, that age of vulnerability factor came down to four years. Still, that is a pretty big improvement over last year," Pittenger said.

Awareness Improving

Through its research, Black Duck aims to help development teams better understand the open source security and license risk landscape. Its report includes recommendations to help organizations lessen their security and legal risks.

"There is increased awareness. More people are aware that they have to start tracking vulnerabilities and what is in their software," said Pittenger.

Black Duck conducts hundreds of open source code audits annually, most of them targeting merger and acquisition transactions. Analysis by its Center for Open Source Research and Innovation (COSRI) revealed both high levels of open source use and significant risk from open source security vulnerabilities.

Ninety-six percent of the analyzed commercial applications contained open source code, and more than 60 percent contained open source security vulnerabilities, the report shows.

All of the targeted software categories were shown to be vulnerable to security flaws.

For instance, the audit results of applications from the financial industry averaged 52 open source vulnerabilities per application, and 60 percent of the applications were found to have high-risk vulnerabilities.

The audit disclosed even worse security risks for the retail and e-commerce industry, which had the highest proportion of applications with high-risk open source vulnerabilities. Eighty-three percent of audited applications contained high-risk vulnerabilities.

Report Revelations

The status of open source software licenses might be even more troubling -- the research exposed widespread conflicts. More than 85 percent of the applications audited had open source components with license challenges.

Black Duck's report should serve as a wake-up call, considering the widespread use of open source code. The audits show that very few developers are doing an adequate job of detecting, remediating and monitoring open source components and vulnerabilities in their applications, observed Chris Fearon, director of Black Duck's Open Source Security Research Group, COSRI's security research arm.

"The results of the COSRI analysis clearly demonstrate that organizations in every industry have a long way to go before they are effective in managing their open source," Fearon said.

The use of open source software is an essential part of application development. Some 96 percent of scanned applications used open source code. The average app included 147 unique open source components.

On average, vulnerabilities identified in the audited applications had been publicly known for more than four years, according to the report. Many commonly used infrastructure components contained high-risk vulnerabilities.

Even versions of the Linux kernel, PHP, Microsoft's .NET Framework, and Ruby on Rails were found to have vulnerabilities. On average, apps contained 27 vulnerable open source components.

Significant Concerns

Many of the points Black Duck's report highlights are longstanding issues that haven't registered a negative impact on open source to any great degree, observed Charles King, principal analyst at Pund-IT.

"The findings are certainly concerning, both in the weaknesses they point to in open source development and how those vulnerabilities are and can be exploited by various bad actors," he told LinuxInsider.

With security threats growing in size and complexity, open source developers should consider how well they are being served by traditional methodologies, King added.

Illegal Code Use

The illegal use of open source software is prevalent, according to the report. That may be attributable to the incorrect notion that anything open source can be used without adhering to licensing requirements.

Fifty-three percent of scanned applications had "unknown" licenses, according to the report. In other words, no one had obtained permission from the code creator to use, modify or share the software.

The audited applications contained an average of 147 open source components. Tracking the associated license obligations and spotting conflicts without automated processes in place would be impossible, according to the report.

Some 85 percent of the audited applications contained components with conflicts, most often violations of the General Public License, or GPL. Three-quarters of the applications contained components under the GPL family of licenses. Only 45 percent of them were in compliance.

Open source has become prominent in application development, according to a recent Forrester Research report referenced by Black Duck.

Custom code comprised only 10-20 percent of applications, the Forrester study found.

Companies Ignore Security

Software developers and IT staffers who use open source code fail to take the necessary steps to protect the applications from vulnerabilities, according to the Black Duck report. Even when they use internal security programs and deploy security testing tools such as static analysis and dynamic analysis, they miss vulnerable code.

Those tools are useful at identifying common coding errors that may result in security issues, but the same tools have proven ineffective at identifying vulnerabilities that enter code through open source components, the report warns.

For example, more than 4 percent of the tested applications had the Poodle vulnerability. More than 4 percent had Freak, and more than 3.5 percent had Drown. More than 1.5 percent of the code bases still had the Heartbleed vulnerability -- more than two years after it was publicly disclosed, the Black Duck audits found.

Recommended Actions

Some 3,623 new open source component vulnerabilities were reported last year -- almost 10 vulnerabilities per day on average, a 10 percent increase from the previous year.

That makes the need for more effective open source security and management more critical than ever. It also makes the need for greater visibility into and control of the open source in use more essential. Detection and remediation of security vulnerabilities should be a high priority, the report concludes.

The Black Duck audit report recommends that organizations adopt the following open source management practices:

  • take a full inventory of open source software;
  • map open source to known security vulnerabilities;
  • identify license and quality risks;
  • enforce open source risk policies; and
  • monitor for new security threats.
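As a rough illustration of the second practice above -- mapping an inventory of open source components to known vulnerabilities -- here is a minimal, hypothetical sketch in Java. The component names, versions and "known vulnerable" entries are made up for the example; a real process would query a vulnerability database such as the NVD, or a commercial feed, rather than a hard-coded map.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class InventoryCheck {

        public static void main(String[] args) {
            // Hypothetical inventory: component name -> version in use.
            Map<String, String> inventory = new HashMap<>();
            inventory.put("openssl", "1.0.1f");
            inventory.put("struts2", "2.3.20");
            inventory.put("commons-collections", "3.2.2");

            // Hypothetical list of versions with publicly known issues.
            // A real tool would pull this from a vulnerability database.
            Map<String, List<String>> knownVulnerable = new HashMap<>();
            knownVulnerable.put("openssl", Arrays.asList("1.0.1f", "1.0.1g"));
            knownVulnerable.put("struts2", Arrays.asList("2.3.20", "2.3.24"));

            // Flag every inventoried component whose version appears in the list.
            inventory.forEach((name, version) -> {
                List<String> bad = knownVulnerable.get(name);
                if (bad != null && bad.contains(version)) {
                    System.out.println("AT RISK: " + name + " " + version);
                } else {
                    System.out.println("no known issue on record: " + name + " " + version);
                }
            });
        }
    }

The value of even a toy check like this is that it only works if the inventory exists in the first place, which is why the report puts a full inventory of open source components at the top of its list.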