Chatbots and virtual assistants are popping up in every aspect of a business. Thanks to platforms like EBM, getting started is becoming a lot easier.
That said, there are still a number of considerations and steps to work through as you implement a chatbot into your organisation.
What does it take to implement a chatbot?
There are 5 key considerations to make when you’re about to implement your chatbot:
Implementing a chatbot assumes you have some key steps complete, such as:
When these steps are complete, the rest of the chatbot implementation should fall into place.
Let’s run through the remaining stages:
The NLP tool you decide to use will be the backbone of your chatbot. You’ll also need to have a clear list of platforms and software you’ll want to integrate into your chatbot. To name but a few possibilities:
Depending on the integration, you will need to allocate the appropriate front-end or back-end developer resource to make it happen. Typically, this is done by a chatbot agency with the expertise and experience to do it efficiently and effectively.
If your chatbot needs to carry out certain functions, such as collecting and validating customer details or sending a client enquiry to the appropriate team when its tag is flagged, you will need to create some custom logic. This will typically be discovered during the conversation design process.
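To make this concrete, here is a minimal sketch of what such custom logic might look like. The function names, the routing table, and the team inboxes are all hypothetical, invented for illustration; a real build would plug into your CRM or ticketing system instead.

```python
import re

# Hypothetical routing table: enquiry tag -> team inbox.
# These addresses are placeholders, not real endpoints.
TEAM_ROUTES = {
    "billing": "billing-team@example.com",
    "support": "support-team@example.com",
}

def validate_email(value: str) -> bool:
    """Basic format check on an email address collected by the chatbot."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def route_enquiry(tag: str) -> str:
    """Return the team inbox for a flagged tag, defaulting to a general inbox."""
    return TEAM_ROUTES.get(tag, "general-enquiries@example.com")
```

The point is less the code itself than the pattern: validation and routing rules live outside the NLP platform, so they can be tested and changed without retraining the bot.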
Alternatively, your business may already have a lot of processes automated with robotic process automation (RPA) tools such as Blue Prism.
Personally, we push to deploy onto Google Cloud because of its simplicity and the fact that Google developed Kubernetes.
There is also the question of many organisations preferring to host in on-premises environments.
Ultimately there is no right answer and your decision of where and how to host is entirely circumstantial.
As with any software involved in the process of handling personal information, there is always the threat of an open back door which allows hackers to extract sensitive data.
As a general rule, since data security is such a complex and serious issue, we recommend keeping things as simple as possible: don't store confidential information unless you have the budget and expertise to handle it correctly, or you risk heavy fines from the ICO.
Data masking allows the chatbot creator to define patterns of text that should be replaced when they appear in the user message.
For example, a pattern could be made to replace text that looks like a credit card number with stars.
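A simple version of that masking step can be sketched with a regular expression. The pattern below is a deliberate simplification, matching runs of 13 to 16 digits that look like a card number, and is not a full PAN validator (it does no Luhn check, for instance).

```python
import re

# Simplified pattern for digit runs that resemble a card number:
# 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_message(text: str) -> str:
    """Replace anything resembling a credit card number with stars."""
    return CARD_PATTERN.sub(lambda m: "*" * len(m.group()), text)

masked = mask_message("My card is 4111 1111 1111 1111 thanks")
```

In a production chatbot this masking would run on every inbound user message before it is logged or passed to the NLP platform, so the raw number never lands in storage.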
Depending on whether your user is logged in or talking with your chatbot publicly, you'll need to tweak your message broker to filter out information accordingly, such as:
And so forth.
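As a rough sketch of that broker-side filtering: redact personal details only when the session is unauthenticated, and pass messages through untouched for logged-in users. The patterns and function name here are illustrative assumptions, not production-grade detectors.

```python
import re

# Illustrative patterns for details to strip from public conversations
# (simplified assumptions, not production-grade detectors).
SENSITIVE_PATTERNS = [
    re.compile(r"\b[^@\s]+@[^@\s]+\.[^@\s]+\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){10,14}\b"),        # phone-like digit runs
]

def filter_for_channel(message: str, authenticated: bool) -> str:
    """Redact personal details when the user is not logged in."""
    if authenticated:
        return message
    for pattern in SENSITIVE_PATTERNS:
        message = pattern.sub("[redacted]", message)
    return message
```

Keeping this logic in the broker, rather than in the NLP platform, means the same rules apply regardless of which channel the conversation arrives on.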
What current deployment management processes do you have in place for your software development and how do chatbots fit into that?
The likelihood is that whatever NLP platform you've picked, it won't have the necessary functionality to adhere to most organisations' deployment procedures.
In particular, the likes of Dialogflow and IBM Watson have limited functionality in terms of:
There are a few tools out there tackling this problem, EBM being one of them!
Legal is critically important alongside your data management.
The key question to consider when it comes to legal is whether you have any regulatory requirements to abide by.
For example, in the financial brokerage sector, if a chatbot gives any sort of advice around mortgages, it needs to be checked with the Financial Conduct Authority.
Does your country have a mandatory requirement for the chatbot to disclose that it's not human, as parts of the United States do?
The complicated process can be boiled down to these key steps:
This article is well worth reading if you’re still working on some of the stages we’ve mentioned above.
As you can imagine, the factors that can affect the cost of a chatbot are numerous.
The short answer is that the typical range to use an agency like us is:
We have an extensive guide that covers every variable that influences the cost of your build here.