March 6, 2018

by Rahul Pandita and Todd Erickson

Our research scientists recently published a workshop paper on the lessons learned implementing the company's natural-language chat interface. This post summarizes the key lessons and the open questions we faced during our initial implementation.

Phase Change is developing a ground-breaking cognitive platform and an AI-based collaborative agent, called Mia, that will dramatically improve software development productivity and efficiency. Mia utilizes natural-language processing (NLP) chatbot capabilities so new users can use the technology immediately with little or no training.


The paper, Towards J.A.R.V.I.S. for Software Engineering: Lessons Learned in Implementing a Natural Language Chat Interface, was co-written by research scientists Rahul Pandita, Aleksander Chakarov, Hugolin Bergier, and inventor and company founder Steve Bucuvalas. The full paper text is available here.

The paper

Virtual assistants have demonstrated the potential to significantly improve information technology workers' digital experiences. Mia will help software developers radically improve program comprehension. Then we will gradually expand its capabilities to include program composition and verification.

Here are a few things we learned during the first iteration of the Mia chat interface implementation.

Reuse components to quickly prototype

Instead of building everything from scratch, consider reusing existing frameworks and libraries to quickly prototype and get feedback.

Gradually migrate from rule-based to statistical approaches

With the ever-increasing popularity and efficacy of statistical approaches, teams are often tempted to implement them before they have enough data to make them work well.

We have noticed that recent advances in transfer learning mean that only a small amount of data is needed to begin reaping the benefits of statistical approaches. However, rule-based approaches still let a prototype get up and running with very little set-up time.

A rule-based approach also allowed us to collect more data, develop a better understanding of the chatbot's requirements, and position ourselves to effectively leverage statistical approaches in the future.
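
As an illustration, a rule-based prototype can be as simple as a table of regular expressions mapped to intents. The Python sketch below is only a sketch; the intent names and patterns are illustrative assumptions, not Mia's actual rule set.

```python
import re

# Illustrative rule-based intent matcher: each intent maps to a list of regex patterns.
# The intents and patterns are hypothetical examples, not Mia's actual rules.
RULES = {
    "filter_by_actor": [
        re.compile(r"\bfilter\s+by\s+actor\s+(?P<actor>\w+)", re.IGNORECASE),
    ],
    "list_use_cases": [
        re.compile(r"\b(list|show)\s+(all\s+)?use\s+cases\b", re.IGNORECASE),
    ],
}


def match_intent(utterance):
    """Return (intent, slots) for the first rule that matches, or (None, {})."""
    for intent, patterns in RULES.items():
        for pattern in patterns:
            match = pattern.search(utterance)
            if match:
                return intent, match.groupdict()
    return None, {}


print(match_intent("filter by actor pet"))  # ('filter_by_actor', {'actor': 'pet'})
print(match_intent("show all use cases"))   # ('list_use_cases', {})
```

Rules like these are cheap to write and easy to inspect, which is what makes them attractive for a first prototype, even though they do not generalize the way a trained model does.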

Adopt recommendation systems

In our testing phase, we learned that although users appreciated honesty when our chatbot did not understand a request, they didn't take it well (to put it mildly) when the chatbot did not provide a way to remedy the situation.

There can be many causes for a chatbot failing to understand a request. For instance, the request might fall outside the chatbot's capabilities, or, as in our case, one class of incomprehensible requests was due to implementation limitations.

While we can't do much about the former, building a recommendation system for the latter class of requests almost always proves beneficial and vastly improves the user experience.

For example, noise in the speech-to-text (STT) component is a major cause of incomprehensible requests. In our fictional banking system, we've created software that allows pets to interact with ATMs, and a Mia user might form a query to discover all of the use cases in which the actor "pet" participates.

If the user says "filter by actor pet," we could expect transcripts like the following from the STT component, which, unfortunately, caused the subsequent pipeline components to misfire:

  • filter boy actor pet
  • filter by act or pet
  • filter by act or pad
  • filter by a store pet
  • filter by actor pass
  • filter by active pet
  • filter by actor Pat

While users will most likely be more deliberate in their subsequent interactions with the STT component, we noticed that these errors are commonplace and significantly degrade the user experience.

To remedy the situation, we used a lightweight, string-similarity-based method to provide recommendations. Subsequent observations indicated that users almost always appreciated the recommendations, except when they were too vague.

To avoid annoying users, we came up with two heuristics. First, we provided no more than three recommendations. Second, to be considered as a candidate for recommendation, a query's similarity to the incoming request had to score above an empirically determined threshold.
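
As a rough sketch of these two heuristics, a string-similarity recommender can be built on Python's standard difflib; the candidate queries and the 0.6 threshold below are assumptions for illustration, not the empirically determined values from our system.

```python
import difflib

# Well-formed queries the pipeline already understands (illustrative examples).
KNOWN_QUERIES = [
    "filter by actor pet",
    "filter by actor customer",
    "list use cases",
    "filter by output balance less than 0",
]

MAX_RECOMMENDATIONS = 3     # heuristic 1: never offer more than three suggestions
SIMILARITY_THRESHOLD = 0.6  # heuristic 2: hypothetical, empirically determined cutoff


def recommend(request):
    """Suggest up to three known queries whose similarity to the request clears the threshold."""
    scored = [
        (difflib.SequenceMatcher(None, request.lower(), query.lower()).ratio(), query)
        for query in KNOWN_QUERIES
    ]
    candidates = [(score, query) for score, query in scored if score >= SIMILARITY_THRESHOLD]
    candidates.sort(reverse=True)
    return [query for _, query in candidates[:MAX_RECOMMENDATIONS]]


# A noisy STT transcript still surfaces the intended query as the top suggestion.
print(recommend("filter boy actor pet"))  # 'filter by actor pet' ranks first
```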

Over time users stop using fully formed sentences

The novelty of using a natural language interface quickly wears off. We observed that most users began sessions by forming requests with proper English sentences, but the conversation was quickly reduced to keyword utterances. Chatbot designers should plan for this eventuality. 😉

Actually, I find this quite fascinating and a natural evolution of conversation. I think of this phenomenon as mirroring our natural conversations. When we first meet someone new, we are deliberate in our conversation. However, over time, conversations become more informal. But that is a topic for future posts.
~Rahul Pandita, Phase Change research scientist

Subliminal priming

In the formal study of conversation, the entrainment effect is informally defined as the convergence of participants' vocabularies over time to achieve effective communication. We stumbled on this effect when we observed that users employed an affected accent to get better mileage out of the STT component.

In psychology and cognitive science, subliminal priming is the phenomenon of eliciting a specific motor or cognitive response from a subject without explicitly asking for it.

We decided to see if subliminal priming would expedite entrainment. We began playing back a normalized version of each query along with the query's response. That simple change led users to converge quickly on our chatbot's vocabulary.
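
Here is a minimal sketch of that echo-back idea, assuming a hypothetical mapping from request variations to a canonical phrasing; the mapping and the response template are illustrative, not Mia's actual implementation.

```python
# Prime users by echoing a canonical phrasing back with every response.
# The mapping and the response template are hypothetical examples.
CANONICAL_CONDITION = {
    "list computations with a negative balance": "output concept balance is less than 0",
    "filter by balance less than 0": "output concept balance is less than 0",
    "filter by output balance less than 0": "output concept balance is less than 0",
}


def respond(query, results):
    # Fall back to the user's own phrasing when no canonical form is known.
    condition = CANONICAL_CONDITION.get(query.lower().strip(), query)
    # Playing the normalized phrasing back with the results subtly primes the user's vocabulary.
    return f"Our system found the following {len(results)} instances where {condition}: ..."


print(respond("List computations with a negative balance", results=["c1", "c2"]))
# Our system found the following 2 instances where output concept balance is less than 0: ...
```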

Consider the frequencies of the following user request variations in our system, measured as the number of test subjects who used each phrasing:

  • list computations with a negative balance: 30
  • filter for computations where output concept Balance is less than 0: 17
  • filter by balance Less Than 0: 16
  • filter by output concept balance is less than 0: 9
  • show computations where output concept balance is less than 0: 1
  • filter by output balance less than 0: 224

By playing back "our system found the following instances where output concept balance is less than 0" with each of these responses, we observed that users began using the phrase "output balance less than 0" more often, as the frequency counts show.

For the keen-eyed: notice that the proper phrase, "filter by output concept balance is less than 0," is actually used less often. Remember, however, that over time users stop using fully formed sentences.

We also observed that speaking with an affected American or British accent works. This may be a product of an unbalanced training set used to create the speech-to-text models, which is why fairness testing is important. But that is yet another topic for future posts.

~Rahul Pandita

Data-driven prioritization

We also realized the benefits of leveraging data to prioritize engineering tasks, as opposed to going with gut instinct.

A pipeline design is often used to realize a chatbot. As with most pipeline designs, the efficacy of the final product is a function of how well the individual components work in tandem. Thus, optimizing the design involves iteratively tuning and fixing the individual components.

So how does one decide which components to tune first? This is where data-driven prioritization can really help. For instance, in our setting, a lightweight error analysis helped on more than one occasion to identify the components we needed to focus on.
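
To illustrate, a lightweight error analysis can be as simple as tagging each failed request with the pipeline component that mishandled it and counting the failures; the component names and log entries below are hypothetical.

```python
from collections import Counter

# Hypothetical error log: each failed request is tagged with the stage that mishandled it.
error_log = [
    {"request": "filter boy actor pet", "failed_component": "speech_to_text"},
    {"request": "filter by act or pet", "failed_component": "speech_to_text"},
    {"request": "filter by actor Pat", "failed_component": "speech_to_text"},
    {"request": "filter by actor pet", "failed_component": "intent_matcher"},
    {"request": "show use cases for pet", "failed_component": "query_executor"},
]

# Count failures per component; the worst offender is the one to tune first.
failures = Counter(entry["failed_component"] for entry in error_log)
for component, count in failures.most_common():
    print(f"{component}: {count}")
# speech_to_text: 3
# intent_matcher: 1
# query_executor: 1
```

A tally like this makes the prioritization decision straightforward rather than a matter of gut feel.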

I can only imagine that data-driven prioritization will become more useful in the future as we experiment with statistical approaches, which often have a pipeline design.
~Rahul Pandita

The full paper text is available here.

We hope that our observations will be helpful for those embarking on the journey to build virtual assistants. We would love to hear your experiences.

Rahul Pandita is a senior research scientist at Phase Change. He earned his Ph.D. in computer science from North Carolina State University. You can reach him at rpandita@phasechange.ai.

Todd Erickson is a tech writer with Phase Change. You can reach him at terickson@phasechange.ai.