Intelligent Personal Assistant with Knowledge Navigation

 

Mr. Amit Kumar1, Mr. Rahul Dutta1, Mr. Harbhajan Rai1, Mrs. Rushali Patil2

1Student, Army Institute of Technology, Pune

2Professor, Army Institute of Technology, Pune

*Corresponding Author E-mail: xentrous@gmail.com, rahuld_15916@aitpune.edu.in, harbhajan_13076@aitpune.edu.in, rpatil@aitpune.edu.in

 

ABSTRACT:

An Intelligent Personal Agent (IPA) is an agent whose purpose is to help the user obtain information from reliable resources using knowledge navigation techniques, saving the time spent searching for the best content. The agent is also responsible for responding to chat-based queries with the help of a conversation corpus. We test different methods for optimal query generation. To make the application easy to use, the agent accepts input through text (keyboard), voice (speech recognition) and a server (Facebook), and returns responses through the same channels. Existing chatbots reply by making changes to the input, whereas we generate responses based on multiple SRT files. The model learns from a dataset of human dialogues and is able to respond in a human-like manner. Queries about well-known entities (places, people and terms) are answered using web scraping, which gives the bot its knowledge navigation features. The agent also learns from its past conversations, supporting semi-supervised learning.

 

KEYWORDS: NLTK, Turing Test, Lemmatization, Levenshtein Distance, Conversation Semantics, Semi-Supervised Learning.

 


 

INTRODUCTION:

Whenever we have a conversation, each response is based on the previous sentence heard. If we have a dataset of human conversations, the same rule applies to that dataset; this holds for more than 95% of conversations. The basic idea behind developing a bot that can respond on Facebook was to let as many users as possible interact with it, building a large learning database for future reference.

 

In this paper we try to find the line in the database that best matches the input query. We examine different algorithms for analyzing the dataset to enable faster search operations. MongoDB is used as the backend database because it provides fast read operations. The database updates itself after each conversation and uses this knowledge in the next chat.
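The sketch below shows one way such a storage layer could look, assuming a local MongoDB instance accessed through pymongo; the database, collection and field names ("ipa", "conversations", "query", "response") are illustrative assumptions rather than the system's actual schema.

```python
# Minimal storage-layer sketch: MongoDB via pymongo (names are assumptions).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
conversations = client["ipa"]["conversations"]

def remember(query, response):
    """Store a finished exchange so the next chat can learn from it."""
    conversations.insert_one({"query": query, "response": response})

def recall(query):
    """Fast read: return a previously stored response for an identical query, if any."""
    doc = conversations.find_one({"query": query})
    return doc["response"] if doc else None
```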

 

PREVIOUS WORK:

Existing chatterbot systems have several shortcomings that must be addressed in order to build a better model. Early bots responded by making simple substitutions in the input (such as replacing “I” with “You” and “We” with “They”, and vice versa).

Example: 

 

Input: I want to know this.              Output: You want to know this.

 

This type of response is far from ideal; a better response is shown below.

 

Input: I want to know this.              Output: What do you want to know?

Another instance where these bots fail is in answering common quiz-style questions such as:

 

Input: Who is Sachin Tendulkar?    or    Input: What is DHCP?    or    Input: When is Independence Day celebrated?

We add web-scraping features that enable the bot to answer such queries and behave in a more user-friendly manner. Many other bots tend to ask questions of the user instead of replying to their queries, which makes the Turing Test practically impossible for them to pass.

 

THE LEARNING PHASE:

The Corpus of Conversations:

We use a corpus built from the subtitle (SRT) files of 184 episodes of the Friends television series, consisting of more than 75,000 lines of human conversation. Each line in the conversation corpus is related to the line that follows it. We find the line in the dataset that best matches the input query and reply with the line that follows it in the dataset.

 

Pre-processing of the Corpus:

The corpus was studied and a vector was created for each line, to be used in the similarity check.

Noisy lines, consisting of blanks, the start and end times of each subtitle, and narrator lines, are removed so that the bot can learn from a noise-free dataset of conversations.
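The following is a minimal sketch of that clean-up step, assuming the episodes are stored as plain-text .srt files in a local friends_srt/ directory and that narrator/scene lines are enclosed in square brackets; both the path and that convention are assumptions for illustration.

```python
# Clean one SRT file: drop indices, timestamps, blanks and narrator lines.
import re
from pathlib import Path

TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}")
NARRATOR = re.compile(r"^\[.*\]$")               # e.g. "[Scene: Central Perk]"

def clean_srt(path):
    """Return the spoken lines of one SRT file, in order, with noise removed."""
    lines = []
    for raw in Path(path).read_text(encoding="utf-8", errors="ignore").splitlines():
        line = raw.strip()
        if not line or line.isdigit() or TIMESTAMP.match(line) or NARRATOR.match(line):
            continue
        lines.append(line)
    return lines

# Consecutive lines form (matched line, reply) pairs for the bot.
corpus = []
for srt_file in sorted(Path("friends_srt").glob("*.srt")):
    corpus.extend(clean_srt(srt_file))
pairs = list(zip(corpus[:-1], corpus[1:]))
```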

 

Fig 1. General Architecture

 

Algorithm Selection:

To search for a particular sentence in the database, we can count the number of words shared by the input query and each candidate line. We use NLTK to conflate inflected forms of the same word: for example, “pick”, “picked” and “picks” are all inflected forms of the word “pick”. This process (lemmatization) can be performed easily in Python using the WordNet corpus from NLTK. We also remove words shorter than three characters (like “a”, “an”, “on”) and common function words that carry mainly grammatical information, such as “above”, “are”, “the”, “beneath” and “around”. We can then calculate the distance between the input query and each candidate line; the line at the smallest distance from the input query is used to generate the response.
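A minimal sketch of this pre-processing, using NLTK's WordNet lemmatizer and stop-word list, is shown below; it assumes the relevant NLTK resources (punkt, wordnet, stopwords) have already been downloaded, and note that the WordNet lemmatizer treats tokens as nouns unless a part-of-speech tag is supplied.

```python
# Tokenize, lemmatize and filter a sentence into a bag of content words.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def normalize(sentence):
    words = []
    for tok in nltk.word_tokenize(sentence.lower()):
        if len(tok) < 3:            # drop very short tokens like "a", "an", "on"
            continue
        if tok in stop_words:       # drop function words like "the", "are", "above"
            continue
        # pass pos="v" (from a POS tagger) to reduce verbs such as "picked" to "pick"
        words.append(lemmatizer.lemmatize(tok))
    return words

print(normalize("I picked the books lying above the table"))
# -> ['picked', 'book', 'lying', 'table'] with the default (noun) setting
```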

 

The issue with this kind of approach is that it ignores grammar and word sense, so the result may be very different from what was originally expected.

Example:

 

Input: I am gathering things for the picnic.         Input: There were many people at the gathering.

 

In the first sentence, “gathering” is a verb whose lemma should be “gather”, while in the second sentence it is a noun and its lemma should remain “gathering”.

 

Instead of only keeping track of which words appear, we also keep a count of the number of occurrences of each word, which makes the distance calculation much more reliable. To calculate the distance between two such count vectors we can use the L1 or L2 norm of their difference.
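A short sketch of this count-based comparison, assuming the normalize() helper from the earlier snippet, is given below; the L1 and L2 distances over word counts are one plausible reading of the norms mentioned above.

```python
# Compare two sentences as word-count vectors using L1 and L2 distances.
from collections import Counter
from math import sqrt

def count_vector(sentence):
    return Counter(normalize(sentence))      # word -> number of occurrences

def l1_distance(a, b):
    return sum(abs(a[w] - b[w]) for w in set(a) | set(b))

def l2_distance(a, b):
    return sqrt(sum((a[w] - b[w]) ** 2 for w in set(a) | set(b)))

query = count_vector("I am gathering things for the picnic")
candidate = count_vector("There were many people at the gathering")
print(l1_distance(query, candidate), l2_distance(query, candidate))
```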

 

Another approach is to calculate the Levenshtein distance, which uses dynamic programming to count the number of edits required to convert the input string into a candidate string. Searching the whole database this way costs O(n·j·k), where n is the number of lines in the database and j and k are the lengths of the input and candidate strings respectively. We use the fact that no input string is longer than about 30-40 tokens, so the factor j·k is bounded by a constant and adds no overhead to the complexity; the nominally cubic comparison therefore effectively runs in O(n) time over the database. Moreover, we can introduce parallel programming so that multiple lines of the database are searched at once.
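The following is a compact dynamic-programming sketch of that distance and of how it could rank candidates; it reuses the normalize() helper and the pairs list from the earlier snippets, which are assumptions of this illustration.

```python
# Word-level Levenshtein distance via the classic two-row DP table.
def levenshtein(a, b):
    """Insertions, deletions and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))              # distances from the empty prefix
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            cost = 0 if x == y else 1
            curr.append(min(prev[j] + 1,          # delete x
                            curr[j - 1] + 1,      # insert y
                            prev[j - 1] + cost))  # substitute x -> y
        prev = curr
    return prev[-1]

def best_response(query, pairs):
    """Reply with the line that follows the closest-matching corpus line."""
    q = normalize(query)
    matched_line, reply = min(pairs, key=lambda p: levenshtein(q, normalize(p[0])))
    return reply
```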

 

To answer knowledge-based questions, we make use of the fact that similar types of queries produce similarly structured responses from the web. We handle one representative input for each type of query, and other similar queries can then be handled by the bot on its own.
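One possible realization of this step is sketched below, assuming Wikipedia as the knowledge source and its public REST summary endpoint; the prefix-stripping heuristic for extracting the entity name is an illustrative assumption, not the paper's exact method.

```python
# Fetch a short factual answer for "Who is ... / What is ..." style queries.
import requests

QUESTION_PREFIXES = ("who is", "what is", "when is", "tell me about")

def knowledge_answer(query):
    q = query.strip().rstrip("?")
    lowered = q.lower()
    for prefix in QUESTION_PREFIXES:
        if lowered.startswith(prefix):
            q = q[len(prefix):].strip()      # keep only the entity, e.g. "Sachin Tendulkar"
            break
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + q.replace(" ", "_")
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None                          # fall back to the conversation corpus
    return resp.json().get("extract")        # plain-text summary of the page

print(knowledge_answer("Who is Sachin Tendulkar?"))
```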

 

CONCLUSION:

We find that the Levenshtein-based approach handles the task well; it also takes into account the ordering of the words in a sentence and, to some extent, the grammar associated with it. Using parallelism further reduces the overhead by a large margin. We will compare the learning time of the Levenshtein-based and the similarity-based methods. An even faster approach could use hashing: save the frequency of each word in a vector and then use an unordered map or set for constant-time searching, as sketched below.
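A rough sketch of that hashing idea, reusing the pairs list and normalize() helper assumed in the earlier snippets: a hash map (a Python dict, the analogue of an unordered map) from each word to the corpus lines containing it gives constant average-time candidate lookup, after which only those candidates need full scoring.

```python
# Inverted hash index: word -> indices of corpus lines containing it.
from collections import defaultdict

index = defaultdict(set)
for i, (line, _) in enumerate(pairs):
    for word in normalize(line):
        index[word].add(i)

def candidates(query):
    """Lines sharing at least one content word with the query, found in O(1) per word."""
    hits = set()
    for word in normalize(query):
        hits |= index[word]
    return hits
```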

 


 

 
