February 11, 2019

Phase Change CEO Steve Bucuvalas featured on the InfluenceNow! podcast

February 7, 2019

by Todd Erickson1

Phase Change’s Inventor, Founder, and CEO, Steve Bucuvalas, was featured in the January 31, 2019, episode of the InfluenceNow! podcast, hosted by Justin Craft2.

The InfluenceNow! podcast highlights startups, exceptional business influencers, and ideas from a variety of industries that influence the world.

Steve and Justin discussed how Phase Change and the technology behind Mia, the first cognitive agent for software development, became a reality.

The interview begins with Steve describing his career leading technology and artificial intelligence (AI) groups in financial services and insurance companies, and his subsequent entrepreneurial career starting and selling two different companies. He tells the story of how a single conversation with the buyer of his second company led to his interest in applying AI technology to the problem of software-development productivity.

At the closing, the buyer said to me, 'What's wrong with you guys in software? AI has changed financial services extraordinarily - increased our productivity 100 times,' which is accurate. 'Why can’t you do that with your own industry?'

That moment led Steve to research the barriers to applying AI to software development and to develop the human-centric principles that led to the creation of the Mia cognitive agent.

The podcast continues with Steve and Justin discussing why organizations that rely on applications written in COBOL (Common Business-Oriented Language) are Phase Change’s first target market.

COBOL is this 40-50 year-old language that has atrocious legacy problems. Because the code has been around [so long], it runs 85% of the world’s financial transactions and [there’s] 220 billion lines of [active COBOL] code. The programmers are all in their 60’s and they all want to retire, but they keep getting incentives to work a few more years because no one wants to learn COBOL. In fact, some of the kids in computer science [college courses] have never heard of it.

Justin and Steve conclude the interview discussing the productivity gains realized by Mia and Phase Change’s technology, and when it will be generally available.

To learn more about how Steve and Phase Change Software will radically improve software productivity, watch the podcast video below or listen to the audio podcast.


1Todd Erickson is a tech writer with Phase Change Software. You can reach him at [email protected].
2Justin Craft is the Founder and CEO of Cast Influence, a Denver, Colorado-based turnkey marketing agency. Phase Change Software is a client of Cast Influence.

March 8, 2018

Phase Change scientists publish paper on lessons learned implementing a natural-language chat interface – blog

Our research scientists recently published a workshop paper on the lessons learned implementing the company's natural-language chat interface. This post summarizes the key lessons learned and identifies the open questions we faced during our initial implementation.

Phase Change is developing a ground-breaking cognitive platform and an AI-based collaborative agent, called Mia, that will dramatically improve software development productivity and efficiency. Mia utilizes natural-language processing (NLP) chatbot capabilities so new users can use the technology immediately with little or no training.


The paper, Towards J.A.R.V.I.S. for Software Engineering: Lessons Learned in Implementing a Natural Language Chat Interface, was co-written by research scientists Rahul Pandita, Aleksander Chakarov, Hugolin Bergier, and inventor and company founder Steve Bucuvalas. The full paper text is available here.

The paper

Virtual assistants have demonstrated the potential to significantly improve information technology workers' digital experiences. Mia will help software developers radically improve program comprehension. Then we will gradually expand its capabilities to include program composition and verification.

Here are a few things we learned during the first iteration of the Mia chat interface implementation.

Reuse components to quickly prototype

Instead of building everything from scratch, consider reusing existing frameworks and libraries to quickly prototype and get feedback.
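
For instance, a prototype pipeline can be assembled almost entirely from off-the-shelf parts. The sketch below is a hypothetical illustration, not our actual stack; it uses the open-source SpeechRecognition package to turn a recorded request into text before any custom logic runs.

    # Minimal prototyping sketch: reuse an existing STT library instead of
    # building speech recognition from scratch. "request.wav" is a
    # hypothetical recording of a user request.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("request.wav") as source:
        audio = recognizer.record(source)          # read the whole file

    try:
        text = recognizer.recognize_google(audio)  # off-the-shelf cloud STT
        print("Transcript:", text)
    except sr.UnknownValueError:
        print("Sorry, I could not understand that request.")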

Gradually migrate from rule-based to statistical approaches

With the ever-increasing popularity and efficacy of statistical approaches, teams are often tempted to implement them without enough data to make them work well.

We have noticed that recent advances in transfer learning mean only a small amount of data is needed to begin reaping the benefits of statistical approaches. However, rule-based approaches still allow prototypes to get up and running with only a small amount of set-up time.

A rule-based approach also allowed us to collect more data, giving us a better understanding of the chatbot's requirements and positioning us to effectively leverage statistical approaches in the future.
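
As a rough illustration of how little set-up a rule-based approach needs, the sketch below hand-codes a couple of intent patterns with regular expressions. The intent names and patterns are hypothetical examples, not the rules Mia actually uses.

    # A minimal rule-based intent matcher: each rule pairs a regular
    # expression with an intent name; named groups capture slot values.
    import re

    RULES = [
        (re.compile(r"^filter by actor (?P<actor>\w+)$", re.I), "filter_by_actor"),
        (re.compile(r"^list use cases$", re.I), "list_use_cases"),
    ]

    def match_intent(utterance):
        """Return (intent, slots) for the first matching rule, else (None, {})."""
        for pattern, intent in RULES:
            match = pattern.match(utterance.strip())
            if match:
                return intent, match.groupdict()
        return None, {}

    print(match_intent("filter by actor pet"))   # ('filter_by_actor', {'actor': 'pet'})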

Adopt recommendation systems

In our testing phase, we learned that although users appreciated honesty when our chatbot did not understand a request, they didn't take it well (to put it mildly) when the chatbot did not provide a way to remedy the situation.

There can be many causes for a chatbot failing to understand a request. For instance, the request might fall outside the chatbot's capabilities, or, as in our case, one class of incomprehensible requests was due to implementation limitations.

While we can't do much about the former, building a recommendation system for the latter class of requests almost always proves beneficial and vastly improves the user experience.

For example, noise in the speech-to-text (STT) component is a major cause of incomprehensible requests. In our fictional banking system, we've created software that allows pets to interact with ATMs, and a Mia user might form a query to discover all of the use cases in which the actor "pet" participates.

If the user says: "filter by actor pet," we could expect transcripts like the following from the STT component, which, unfortunately, caused the subsequent pipeline components to misfire:

  • filter boy actor pet
  • filter by act or pet
  • filter by act or pad
  • filter by a store pet
  • filter by actor pass
  • filter by active pet
  • filter by actor Pat

While users will most likely be more deliberate in their subsequent interactions with the STT component, we noticed that these errors are commonplace and negatively affect the user experience.

To remedy the situation, we used a lightweight, string-similarity-based method to provide recommendations. Subsequent observations indicated that users almost always liked the recommendations, except when they were too vague.

To avoid annoying users, we came up with two heuristics. First, we provided no more than three recommendations. Second, to be considered a candidate for recommendation, a query's similarity to the incoming request had to score higher than an empirically determined threshold.
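
As a rough sketch of how such a recommender can work, the snippet below uses Python's standard difflib to suggest at most three known queries whose similarity to the incoming request clears a cutoff. The query list and the 0.6 threshold are illustrative assumptions, not our production values.

    # Lightweight string-similarity recommendations: suggest up to three
    # known queries that score above a similarity threshold.
    from difflib import get_close_matches

    KNOWN_QUERIES = [
        "filter by actor pet",
        "filter by actor customer",
        "list use cases",
    ]

    def recommend(request, max_suggestions=3, threshold=0.6):
        return get_close_matches(request, KNOWN_QUERIES,
                                 n=max_suggestions, cutoff=threshold)

    print(recommend("filter boy actor pet"))   # closest known queries first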

Over time users stop using fully formed sentences

The novelty of using a natural language interface quickly wears off. We observed that most users began sessions by forming requests with proper English sentences, but the conversation was quickly reduced to keyword utterances. Chatbot designers should plan for this eventuality. 😉

Actually, I find this quite fascinating and a natural evolution of conversation. I think of this phenomenon as mirroring our natural conversations. When we first meet someone new, we are deliberate in our conversation. However, over time, conversations become more informal. But that is a topic for future posts.
~Rahul Pandita, Phase Change research scientist

Subliminal priming

In the formal study of conversation, the entrainment effect is informally defined as the convergence of participants' vocabularies over time to achieve effective communication. We stumbled on this effect when we observed that users employed an affected accent to get better mileage out of the STT component.

In psychology and cognitive science, subliminal priming is the phenomenon of eliciting a specific motor or cognitive response from a subject without explicitly asking for it.

We decided to see if subliminal priming would expedite entrainment. We began playing back a normalized version of each query with the query responses. That simple change led users to quickly converge on our chatbot's vocabulary.
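
As a rough sketch of this playback idea, the response below simply prefixes each answer with the request rendered in the chatbot's canonical vocabulary. The intent name and template are hypothetical examples, not Mia's actual phrasing.

    # Echo a normalized version of the request with every response so that
    # users gradually converge on the chatbot's canonical vocabulary.
    def normalize(intent, slots):
        """Render a parsed request in the chatbot's own phrasing."""
        if intent == "filter_balance_lt":               # hypothetical intent
            return f"output concept balance is less than {slots['amount']}"
        return "your request"

    def respond(intent, slots, results):
        return (f"Our system found the following instances where "
                f"{normalize(intent, slots)}: {', '.join(results)}")

    print(respond("filter_balance_lt", {"amount": 0},
                  ["computation 12", "computation 47"]))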

Consider the frequencies of the following user request variations in our system (number of uses by test subjects):

  • list computations with a negative balance: 30
  • filter for computations where output concept Balance is less than 0: 17
  • filter by balance Less Than 0: 16
  • filter by output concept balance is less than 0: 9
  • show computations where output concept balance is less than 0: 1
  • filter by output balance less than 0: 224

By playing back "our system found the following instances where output concept balance is less than 0" with each of these request responses, we observed that users began using the phrase "output balance less than 0" more often, as shown in the frequency counts.

For the keen-eyed, notice that the full, proper phrase, "filter by output concept balance is less than 0," is used less often. However, remember that over time, users stop using fully formed sentences.

We also observed that speaking with an affected American or British accent improves recognition. This may be a product of an unbalanced training set used during the creation of the speech-to-text models. That's why fairness testing is important. But that is yet another topic for future posts.

~Rahul Pandita

Data-driven prioritization

We also realized the benefits of using data to prioritize engineering tasks rather than going with gut instinct.

A pipeline design is often used to realize a chatbot. As in most pipeline designs, the efficacy of the final product is a function of how well the individual components work in tandem within the pipeline. Thus, optimizing the design involves iteratively tuning and fixing the individual components.

So how does one decide which components to tune first? This is where data-driven prioritization can really help. For instance, in our setting, a lightweight error analysis helped on more than one occasion to identify the components we needed to focus on.
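
A minimal sketch of that kind of error analysis, assuming a hypothetical failure log that records which pipeline component each failed request was attributed to:

    # Tally failures by pipeline component to decide what to tune first.
    from collections import Counter

    failed_requests = [                                  # hypothetical failure log
        {"text": "filter boy actor pet", "failed_component": "speech_to_text"},
        {"text": "filter by act or pet", "failed_component": "speech_to_text"},
        {"text": "show petting zoo",     "failed_component": "intent_matcher"},
    ]

    counts = Counter(r["failed_component"] for r in failed_requests)
    for component, n in counts.most_common():
        print(f"{component}: {n} failures")              # worst offender first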

I can only imagine that data-driven prioritization will become more useful in the future as we experiment with statistical approaches, which often have a pipeline design.
~Rahul Pandita

We hope that our observations will be helpful for those embarking on the journey to build virtual assistants. We would love to hear your experiences.
