OPINION
IBM’s watsonx could be a generative AI game-changer
Generative AI has been much hyped over the past few months, but it still faces serious teething issues. IBM’s watsonx could help solve some of those problems.
IBM this week announced watsonx at Think, and it has the potential to be a generative AI standout. That’s important because generative AI has hit the tech industry like a Mack truck and appears to be advancing at an unbelievable rate. Just as quickly, well-founded concerns have emerged about the quality of the massive training sets behind it. First, the technology is very new to most people, and the risks surrounding it aren’t well known. Second, we still don’t know how to ensure that what generative AI tools produce is accurate.
In addition, much as with Linux in its early days, intellectual property issues surround generative AI and are scaring creators half to death. Basic productivity tools, such as those used for editing, formatting, and making presentations, appear relatively safe. But when tools like ChatGPT are asked to create content, provide decision support, or even make decisions autonomously, those IP issues become far more pronounced. And the speed of adoption could lead to future problems if the questions surrounding quality aren’t adequately addressed.
That’s where IBM comes in. It’s a company with decades of operational AI experience, one that identified years ago what today’s concerns would be and worked to mitigate them long before we heard the term generative AI. Decades ago, under Thomas Watson Jr., IBM established a policy of trustworthy products and services, and it has more recently been outspoken that AI should be used to enhance, not replace, employees.
IBM’s big enterprise and government focus means it was taking reliability, availability, and security seriously well before I joined the company in the 1980s. Its defenses against
malware are near legendary, its z series mainframe platform is the most reliable and
secure platform in the market, and it’s an industry leader in hybrid computing. Its unique,
secure IBM Cloud differentiates it from the pack and could be something of a model for
generative AI as it evolves.
As AI advances, it will need to embrace a hybrid model, because moving and updating the massive datasets that support it will be impractical, and at least some of the processing will have to move closer to the user to keep latency in check. For instance, if you were
to use this technology on a manufacturing line to assure build quality, excessive latency
could lead to higher failure rates and lower production volumes.
With neural processing units (NPUs) and vision processing units (VPUs) coming to market next year, the need for a desktop solution will only grow. That will require IBM to leverage one of its many partnerships, and it needs to do so before those partners get too invested in a different technology.
While generative AI could revolutionize technology, like any new product it is having some teething issues when it comes to quality and trustworthiness. IBM’s watsonx could address these concerns because it’s based on decades of research, long-standing trust-assuring practices, and mainframe-class standards that should make it the benchmark against which other efforts are measured. If IBM can figure out what to do about its non-existent desktop business, watsonx stands as the first, best hope for a trustworthy generative AI solution.
Rob Enderle is president and principal analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With more than 25 years of experience in emerging technologies, he provides guidance to regional and global companies.