Jake Burke
eboardheader.png
EboardBanner.png
 
Introduction.png

Problem Brief

While having a face-to-face conversation, nonverbal cues such as vocal tone, facial expressions, and gestures improve the quality of the interaction. Most forms of text-based computer-mediated communication (CMC) lack these human qualities, which makes it more difficult for conversation participants to recognize emotions such as sarcasm, humor, and anger. However, in recent years, several studies have shown that emojis can convey emotion in communication and serve a nonverbal function in a text conversation (1, 2, 3). Emojis are already widely adopted in text-based CMC, with nearly every instant messaging platform in existence supporting them. To provide some context on the importance of emojis, over five billion emojis were sent per day on Facebook Messenger in 2017, and half of all Instagram comments included an emoji as of mid-2015. Additionally, according to a 2015 report by SwiftKey, users typing with their mobile keyboard entered over one billion emojis throughout a four-month period. Although over 800 emojis were available to users during that period, traditional “face” emojis (e.g., 😀) accounted for nearly 60% of all emojis sent. Roughly 70% of the messages containing emojis expressed a positive emotion, while only 15% expressed a negative emotion. As the large set of emojis grows, manually searching for and selecting them becomes a tedious task that interrupts the flow of text entry. In an attempt to solve this issue and ease the emoji entry process, products began to implement keyboards that automatically predict emojis. These keyboards predict emojis in two variations: word-level and semantic-level, both of which are elaborated on in the section below.

Although text-based CMC is a great advancement and has revolutionized how we interact with other people, we found that its conversations are less meaningful than face-to-face ones. Emojis are a step in the right direction toward making online interactions more human and emotional; however, the problems with emoji selection and emoji prediction are apparent. Additionally, even with improved emoji selection tools, text-based CMC is still far from replicating face-to-face conversation. Therefore, we think there is still potential to get closer to that bar.

 
Problem.png
 


 
Research.png

Word-Level vs. Semantic-Level

Before going into detail about the research performed for this problem and design, it is important that I elaborate on the two types of emoji prediction that exist today.

Word-Level

Word-level emoji prediction is the type most of us use daily and are most familiar with. Relevant emojis appear in a candidate list based on the most recently typed word. Both Gboard and the iOS keyboard use word-level emoji prediction.
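As a toy illustration (not the actual Gboard or iOS implementation), word-level prediction can be approximated as a lookup from the most recently typed word to a list of candidate emojis; the mapping below is a hypothetical example:

```python
# Hypothetical sketch of word-level emoji prediction: suggest emojis
# based only on the most recently typed word.
WORD_TO_EMOJIS = {
    "pizza": ["🍕"],
    "happy": ["😊", "😄"],
    "love": ["❤️", "😍"],
}

def word_level_predict(message: str) -> list[str]:
    """Return candidate emojis for the last word typed, if any."""
    words = message.lower().split()
    if not words:
        return []
    return WORD_TO_EMOJIS.get(words[-1], [])

print(word_level_predict("I love"))  # ['❤️', '😍']
```

Because the lookup ignores everything but the final word, this sketch also demonstrates the mechanism's core weakness: it cannot see the meaning of the sentence as a whole.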

Semantic-Level

Semantic-level emoji prediction is a newer type of emoji prediction that only exists in a few keyboards (e.g., Dango). In semantic-level prediction, emojis are based on the meaning of the entire message content rather than just specific words. This type of prediction aims to give users recommended emojis that better relate to the emotion within the message rather than just the most recently typed word.

Below is a chart that gives examples of word-level and semantic-level emoji prediction:

In word-level prediction, the suggested emojis relate to the literal meaning of certain words, while in semantic-level prediction, the suggestions reflect the meaning of the entire sentence.

How semantic-level prediction works & its limitations

To predict emojis at a semantic level, emojis must be linked with the meaning of the messages. Semantic keyboards achieve this using one of two methods: embedding or direct representation. Embedding methods leverage the official description of an emoji to learn its emotional and semantic content (1, 2). For example, the “face with tears of joy” emoji (😂) can be represented by the words “face,” “tears,” and “joy.” In contrast, direct representation methods use neural networks to map features of the text to emojis. This technique was proposed by Felbo et al., who gathered 1.2 billion tweets containing 1 of 64 common emojis to train a model that mapped the unaltered text content of the tweets to emoji outputs. The model achieved state-of-the-art results in several sentiment classification tasks; however, their approach had several limitations. First, most emojis were predicted based purely on the emotion of the message rather than its literal meaning. For example, if “happy birthday” was the input, the predicted emojis were happy face emojis (😊😘) rather than the birthday cake emoji (🎂). Second, the model was trained to handle only 64 emojis, although those emojis were the most common on Twitter (many of them faces rather than objects).
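The embedding idea can be sketched in a deliberately simplified form: instead of learned embeddings, the toy version below scores each emoji by the word overlap between the whole message and the emoji's official description. Real systems use trained vector representations; this is only meant to illustrate how description text links an emoji to sentence meaning.

```python
# Toy sketch of the embedding approach: rank emojis by how many of
# their official description words appear anywhere in the message.
# (Learned embeddings would replace this bag-of-words overlap.)
EMOJI_DESCRIPTIONS = {
    "😂": "face with tears of joy",
    "🎂": "birthday cake",
    "😊": "smiling face with smiling eyes",
}

def semantic_level_predict(message: str, top_k: int = 2) -> list[str]:
    """Rank emojis by description-word overlap with the full message."""
    msg_words = set(message.lower().split())
    scored = []
    for emoji, desc in EMOJI_DESCRIPTIONS.items():
        overlap = len(msg_words & set(desc.split()))
        scored.append((overlap, emoji))
    scored.sort(reverse=True)  # highest overlap first
    return [emoji for score, emoji in scored if score > 0][:top_k]

print(semantic_level_predict("happy birthday to you"))  # ['🎂']
```

Note that even this crude version surfaces the birthday cake for “happy birthday”, the case the Felbo et al. model missed, because it matches the object word “birthday” rather than only the emotion of the message.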

Study Introduction

To gain strong insight into the research problem, my partner on this project and I decided to conduct in-depth user research. The specific topics we wanted to explore through this research were:

Study Introduction.png

Through investigating these topics, we aimed to discover straightforward ways to improve the overall messaging experience. To conduct this study, we built an Android keyboard using the open-source project AnySoftKeyboard that allowed us to easily switch the keyboard’s emoji prediction between word-level and semantic-level. Additionally, our keyboard tracked five types of actions for us:

  1. Characters typed

  2. Characters deleted

  3. When an emoji is selected from the prediction panel

  4. When an emoji is selected manually from the keyboard

  5. Emojis deleted
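A minimal sketch of how logging these five actions might be structured (hypothetical names, not the actual AnySoftKeyboard code) is:

```python
# Hypothetical sketch of the study keyboard's action logging: each of
# the five tracked actions is recorded as a timestamped event so the
# counts (e.g., TotalCharacters, SelectedEmojis) can be computed later.
from dataclasses import dataclass, field
import time

ACTIONS = {
    "char_typed", "char_deleted",
    "emoji_from_prediction", "emoji_from_keyboard", "emoji_deleted",
}

@dataclass
class ActionLogger:
    events: list = field(default_factory=list)

    def log(self, action: str, payload: str = "") -> None:
        """Append one timestamped keyboard event."""
        assert action in ACTIONS, f"unknown action: {action}"
        self.events.append({"t": time.time(), "action": action, "payload": payload})

    def count(self, action: str) -> int:
        """How many times a given action occurred."""
        return sum(1 for e in self.events if e["action"] == action)

logger = ActionLogger()
logger.log("char_typed", "h")
logger.log("emoji_from_prediction", "😊")
print(logger.count("char_typed"))  # 1
```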

Pictures of the keyboard we built for this study are below:

With this keyboard, we compared the effects of word-level prediction and semantic-level prediction on users and their conversations by conducting two different studies over two months. The first study was a Laboratory Study that investigated the effects of prediction in online conversations for both senders and receivers among 24 participants. The second study was a Longitudinal Study that deployed our keyboard onto 18 participants’ phones for a minimum of 15 days to gather realistic data on emoji use.


Crafting Our Survey

In addition to the objective data we received from our keyboards recording actions, we obtained subjective data by having participants complete surveys throughout each study. The survey we designed included usability rating questions via Likert scales ranging from 1 (strongly disagree) to 7 (strongly agree) as well as open-ended questions. Condensed versions of the Laboratory Study and Longitudinal Study surveys are presented in their respective sections below.

Research Goals

Emojis have been shown to enrich conversations (1, 2), but the role that emoji prediction systems play in this matter has not been explored; instead, prior work on prediction systems has mainly focused on retrieval accuracy, precision, and recall (1, 2, 3). How do different prediction types influence emoji usage? How do they differ in terms of usability? How do they influence the chat experience, and specifically, the engagement and the clarity of the conversation? Overall, our goal was to examine the impact of emoji prediction in online conversations. To summarize it into two specific questions:

Competitive Analysis

Before moving forward with the research, we performed a brief competitive analysis of the most popular keyboards currently in use:

DangoCA.png

Laboratory Study

Although in-lab experiments are not generally representative of realistic conditions, they are useful for studying conversations because they allow data to be gathered from both sides—emoji senders and emoji receivers. This research design enabled us to explore how the emoji prediction systems might affect the two conversational sides differently. If a person uses more emojis because they find it easier to do so, a recipient might react in two ways: they might enjoy the conversation more and reciprocate, or they might enjoy the conversation less due to frustration or confusion. In this laboratory experiment, pairs of participants had conversations using our keyboard with different emoji prediction mechanisms. We focused on determining how different emoji prediction mechanisms affected the chat experience, and if the effect was different for senders and receivers.

To provide some background on the procedure of the Laboratory Study, I have listed details about it below:

  • We recruited twenty-six participants (15 females, 11 males) between 18 and 34 years old (M=28.9, SD=4.2).

  • Participants were recruited via emails, word-of-mouth, and convenience sampling at the University of Washington.

  • The participants were randomly divided into 13 pairs.

  • The pairs were constructed such that the participants did not know one another and did not meet face-to-face until the end of the study.

  • Participants were provided with Nexus 6P smartphones running Google Android 7.0 for the study.

  • Our keyboard was installed on each phone, and WeChat was used as the instant messaging application because WeChat provided a function to export the chat history.

  • Participants were told that they would take part in an online chatting experiment using our mobile keyboard.

  • They chatted with another participant for three 10-minute sessions, each of which was assigned to one of three emoji prediction conditions: no-prediction, word-level prediction, or semantic-level prediction.

  • The order of the conditions was fully counterbalanced.

  • The participants were told that they could use a “recent activity” as a starting topic of conversation, but they could also steer the conversation towards any topic of their choosing.

  • The participants were also told that the only difference between the sessions would be the keyboard’s emoji prediction results, but they were not told anything about those mechanisms, what they were, or how they worked.

    Before the study began, participants were told to fill out a questionnaire about their chatting and emoji behavior:

 

The questionnaire about online chatting and emoji use behavior.

 

Additionally, participants filled out a questionnaire asking about their chat experience after each 10 minute session:

This part of the questionnaire probed their engagement (Q1, Q2, and Q6) and expressiveness (Q3, Q4, and Q5) regarding the chat experience, both of which were derived from prior work on CMC [20].

 

The survey questions about the chat experience. Answers were provided via Likert scales ranging from 1 (strongly disagree) to 7 (strongly agree).

 

When participants used word-level or semantic-level prediction in a session, they also completed the usability questionnaire shown below, which was adapted from the System Usability Scale (SUS) survey.

 

The usability survey for the prediction keyboards. Answers were provided via Likert scales ranging from 1 to 7.

 

At the end of the 30-minute session, participants were asked to fill out a form with two open-ended questions:

 
 
 

An infographic of the entire Laboratory Study process is presented below:

 

General Results of Laboratory Study

Our dataset includes 12 valid participant pairs, with two pairs per condition order. In total, we collected 12 × 3 = 36 data logs of valid sessions, 72 surveys regarding the chat experience, and 48 usability rating surveys for emoji prediction, and we analyzed 48 responses to the open-ended questions using inductive analysis [19]. Among the 24 participants, 22 stated that they always communicate with their phones, while the other two stated that they only use their phones sometimes. Nine participants stated that they always use emojis in online conversations, 14 sometimes, and one seldom. As for their main emoji entry method in real life, 14 participants manually picked emojis from the list, one used word-level prediction from the keyboard, and nine used both methods.

The descriptive results of the logged data are shown below:

Means (and standard deviations) of total characters

TotalCharacters is the number of characters (excluding emojis) sent in the conversation, TotalEmojis is the number of emojis sent in the conversation, and SelectedEmojis is the number of emojis picked from the prediction list when the prediction function was on.

Results and Findings of Laboratory Study

We were able to make multiple discoveries thanks to both our objective data and subjective data. They are separated and listed below:

Objective Findings:

  • Prediction being on had no significant effect on TotalCharacters, indicating that the prediction mechanism did not affect the overall amount of conversation participants had.

  • Although the total number of emojis participants used was consistent across conditions, participants selected far more semantic-predicted emojis than word-predicted emojis.

  • Participants felt that the suggestions by the semantic-level prediction mechanism were more useful than those from the word-level prediction.

Subjective Findings:

  • Emoji usage had a stronger effect on senders than receivers in terms of the chat experience (i.e., the expressiveness and engagement).

  • Five participants mentioned that the semantic-level prediction was “convenient” and “time-saving,” while only two participants mentioned the same advantages of word-level prediction. The convenience might come from the relevance of the semantic prediction results.

  • P13 pointed out, “The first one (semantic) is better than the second one (keyword). Showing more emotion-related emojis. The second one is related to the word itself and it makes no sense to use the emoji in the conversation.” P25 preferred the semantic-level prediction because it was “reflective of the tone of the message.” Six participants commented that it was “fun” to use semantic-level prediction. P14 wrote, “They (semantic predictions) show humorous emojis in a positive way.” Although P19 did not use many emojis during the study, they stated that “their (emoji) appearance in prediction bars makes me feel good.” This feedback supports our finding that people choose more emoji suggestions from semantic-level prediction over word-level prediction.

  • Only two participants preferred word-level prediction, and they did so because it sometimes provided more emoji options than semantic-level prediction.

  • Participants also mentioned that they did not usually insert emojis within their messages, so it would be less distracting if the suggestions were only relevant for the end of their sentences. Three participants suggested that the word-level prediction should be “aware of emotions,” which was exactly what semantic-level provides.

    I urge you to read the full research paper on this study, which goes in-depth into the different data analyses we used to reach these findings. To view it, click here.

Longitudinal Study

To explore the longitudinal effects of the different prediction mechanisms as well as how they performed in an everyday setting rather than a controlled laboratory setting, we conducted a 15-day field deployment with 18 participants. Given the results of the laboratory study, the longitudinal study focused on the usability of the emoji prediction mechanisms and their effect on emoji usage during everyday conversations.

Similar to the last section, I have listed details about the Longitudinal Study procedure below:

  • We recruited eighteen participants (8 females, 10 males) between 18 and 43 years old (M=24.0, SD=6.4)

  • Participants were recruited via emails, word-of-mouth, and convenience sampling at the University of Washington.

  • Participants had to use English as their primary language and use a smartphone with Android 6.0 on a daily basis.

  • Those who were in the laboratory study were not allowed to participate in the longitudinal study to avoid any bias with knowledge about the prediction mechanisms.

  • The 15-day study contained three 5-day periods (occasionally six or seven days due to scheduling issues).

  • All participants used the no-prediction keyboard in the first 5-day period as a baseline. During the second period, half of the participants used the word-level prediction keyboard while the other half used the semantic-level prediction keyboard. Everyone returned to the no-prediction keyboard during the last period to determine whether they went back to their baseline behavior.

  • When each participant was enrolled, they were asked to fill out the same questionnaire about online chatting and emoji use (the same one as in the Laboratory Study).

  • After each 5-day period, participants met with me individually to have their keyboard reconfigured to another condition and to fill out a survey about the experience.

  • After the second period, when emoji prediction was active, each participant also completed an additional usability survey (the same one as in the Laboratory Study).

    Surveys for each leg of this Longitudinal Study are shown below:

 
SurveyAfterPeriod1.png
 
 
 
 

The survey questions after each period. The emoji prediction was on only during period 2, thus the questions are different.

 
 

An infographic of the entire Longitudinal Study process is presented below:

 

General Results of Longitudinal Study

Throughout this study we collected 18 × 3 = 54 data logs and 18 survey results about the usability of the emoji prediction, and we analyzed 54 open-ended question responses using inductive analysis. Among the 18 participants, 14 stated that they always communicate with their phones, three sometimes, and one seldom. Four participants stated that they always use emojis in online conversations, 11 sometimes, and three seldom. As for their main emoji entry method in real life, 10 participants manually picked emojis from the list, one used word-level prediction from the keyboard, and seven used both methods.

The descriptive results of the logged data are shown below:

 

The mean of TotalCharacters, TotalEmoji, and SelectedEmoji over the longitudinal study. Within each period block, the left bar indicates the word-level keyboard group, while the right bar indicates the semantic-level keyboard group. Error bars represent ±1 standard deviation.

 

Similar to the Laboratory Study, TotalCharacters is the number of characters excluding emojis sent in the conversation, TotalEmojis is the number of emojis used in the conversation, and SelectedEmojis is the number of emojis picked from the prediction list if the prediction function was turned on. 

Results and Findings of Longitudinal Study

Similar to the Laboratory Study, we were able to make multiple discoveries thanks to both our objective data and subjective data. They are separated and listed below:

Objective Findings:

  • Participants generally used more emojis with a prediction function on than without.

  • On average, participants who used word-level prediction in the second period increased their emoji usage by 31.5% over their baseline, while participants who used semantic-level prediction increased their usage by 125.1%.

  • TotalCharacters was not significantly different between the first and second periods for either prediction condition. However, TotalEmojis was significantly different between the two periods for semantic-level prediction but not for word-level prediction. Thus, although emoji usage increased in both conditions, only semantic-level prediction significantly encouraged participants to input more emojis.

  • Those who used the semantic-level keyboard entered a larger proportion of emojis from the prediction list instead of picking them manually.

  • Results showed that emoji usage increased significantly more with semantic-level prediction than word-level prediction from the first to second period. The change between the first and third periods was not significantly different, indicating that the change in emoji usage was due to the emoji prediction and not time.

Subjective Findings:

  • The feedback that was given through the open-ended questions was similar to that of the Laboratory Study. Semantic-level prediction received more positive feedback than word-level prediction.

  • In the semantic-level group, seven out of nine participants found themselves using emojis more often than before the study. Only three out of nine thought the same in the word-level group.

  • In the semantic-level group, four participants liked the convenience of the prediction, mentioning that the auto-generated emojis saved them time and “resulted in a faster and better product in regards to being able to seamlessly add emojis into everyday text” (P5). Another frequently mentioned advantage of the semantic-level mechanism was relevance. For example, P11 commented, “I must say that the predictions were accurate most of the times ... It could guess when my sentences have a positive connotation and a negative one.” P15 mentioned experiences when her choice of emojis was led by the prediction results: “I feel that there have been a few instances in which I would use a particular emoji when using a keyboard that was not enabled with emoji prediction, and when this keyboard suggested a different emoji, I felt that it suited my preferences better.”

  • P12 liked the fact that semantic-level prediction raised awareness of unfamiliar emojis and added more to the conversation than word replacement.

  • In the word-level group, participants expressed more neutral opinions about the prediction mechanism. Two participants liked the relevance of the predictions, which entailed providing options after typing a related word.

  • No participants had knowledge of the prediction mechanism they were using before or during the study. After the researchers explained the details at the end of the study, people were more receptive to semantic-level prediction.

    Again, I urge you to read the full research paper on this study, which goes in-depth into the different data analyses we used to reach these findings. To view it, click here.

Secondary Research

Lastly, we wanted to take advantage of the existing knowledge on this topic, since text-based CMC and emojis are already a deeply researched area. We learned a lot from specific sources and research papers, which can be found in the references of our research paper here.

 

 
Define.png

Discussion

Our goal was to examine the impact of emoji prediction on online conversations. In particular, we sought to answer two questions:

We conducted a laboratory experiment and a field deployment to address these questions, and in doing so we found that emoji usage had a stronger effect on senders than receivers. We discovered that, as supplemental nonverbal characters, emojis serve more as decorations of the messages; thus, participants did not pay as much attention to the emoji prediction methods as to the text entry method. However, we were able to identify what the different prediction mechanisms influenced most: the user experience. Although quantitative analysis did not show significant differences, participants were more excited about semantic-level prediction. Even without knowing exactly what the prediction did, they were pleasantly surprised that the predicted emojis were related to the sentiment of the message. In the second study, participants used more emojis in their daily conversations with the semantic-level keyboard than with the word-level keyboard, which might indirectly influence their chat experience. From participants’ examples, we also know that the prediction even led them to use different emojis than they had originally planned, which directly changed the message. Attitudes toward the uncertainty of the prediction results were also split. Some participants preferred word-level prediction because they knew what emojis to expect after typing a certain word. In contrast, some participants enjoyed the uncertainty of semantic-level prediction and wondered about the results after typing the message.

Problems

Semantic-level prediction is still developing and thus has many problems. Throughout both studies, participants discussed with us the problems they had with it, which we wanted to take note of when designing our solution:

  • The relevance of the emojis presented through semantic-level prediction was occasionally poor, with six participants suggesting we increase the variety of emoji options.

  • Participants found semantic-level prediction to be very repetitive and lacking in non-facial emojis.

  • Some participants wished that the keyboard could better understand sarcasm and special meanings.

  • Semantic-level prediction could be hit or miss among users, sometimes completely missing the meaning of what they were typing.

Comparing the responses after the second and third periods revealed improvement suggestions for the two prediction mechanisms. For semantic-level prediction, six participants suggested increasing the variety of emoji options. For example, P11 found their results to be “very repetitive,” and P13 desired more non-facial emojis. P12 desired a keyboard that could understand sarcasm and special meanings within messages as he “often uses emojis to supplement or change the emotion of the message;” note that this was one of the core reasons Cramer et al. cite that people use emojis in the first place. P5 wanted a predictive keyboard that better supported the chaining of multiple emojis together. P9 wished for a keyboard that could be aware of the app he was using and provide situational emojis. He noted that his mind-set “is very different when texting friends than when writing an email for work.”
For word-level prediction, five participants wanted more relevant predictions. P20 offered a vivid example: “Sometimes the predicted emoji missed the meaning of what I was typing. For example, when responding to a friend who was apologizing to me, I typed, ‘No worries.’ I say this in a positive way, however, the emojis suggested were sad or anxious expressions, probably based on the last word typed, which was ‘worries.’ Therefore the prediction missed the intended meaning of the phrase, so maybe it would be impactful to work on the algorithm to detect multiple words/phrases to better understand the meaning within a message.” The above observation is the very reason for why semantic-level prediction has been proposed in the past (1, 2).

Key Study Discoveries

Through discussion and analysis of our research, as well as iterating on our bolded findings from both studies, we are able to list the key findings from our research that we want to focus on:

 
 

Additionally, through our secondary research we came across a study in which Cramer (8) discovered, through a 228-respondent online survey, that the three major reasons people use emojis are:

 
 

Design Goals

 
 
 

 
Ideate.png

Brainstorm

At the beginning of ideation, we looked through our research and used divergent thinking to come up with as many ideas as possible. Some of these ideas are:

  • Suggestion diversity. Emoji prediction systems should suggest various types of emojis, ranging from objects of relevance to face emojis for emotion. Although semantic-level prediction was preferred in our study, many participants wanted it to provide more suggestions than just face emojis. Some participants also appreciated that word-level prediction would sometimes suggest rare emojis.

  • Emoji prediction systems could combine the two prediction mechanisms to provide more diverse results. While the user is typing a sentence, word-level prediction could suggest emojis; once the user has finished typing the sentence, semantic-level prediction could suggest emojis that summarize the message, similar to how punctuation marks are used at the end of a sentence. Balancing the two is useful because not all messages contain strong semantic information, and people also use emojis to provide additional information for certain words.

  • Non-intrusive style. Emoji prediction keyboards should only predict emojis when necessary. Some participants only wanted suggestions at the end of messages, as they found the always-on style of semantic-level prediction to be distracting.

  • Personalization. Beyond the most common emoji suggestions, emoji prediction systems should be aware of the user’s personal favorites and usage behaviors. Usage behaviors could be based on categories like facial emojis or heart emojis, or the emotions that the user prefers to express. In addition, it will be useful if the prediction could recognize the recipient or the usage scenario. For example, one might want heart emojis when chatting with a family member with a message app, and object emojis when composing an email.

  • P9 wished for a keyboard that could be aware of the app he was using and provide situational emojis, noting that his mind-set “is very different when texting friends than when writing an email for work.” We could incorporate prediction settings based on whom the user is conversing with.

  • Word-level prediction while writing messages, and semantic-level prediction at the end of each sentence

  • A keyboard that understands sarcasm and special meanings within messages, as P12 “often uses emojis to supplement or change the emotion of the message”

  • Drag and drop emojis

  • Add emojis after you send a message

  • Animated Emojis
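The hybrid idea from the list above can be sketched as a simple dispatch rule: suggest word-level emojis while a sentence is in progress, and switch to semantic-level suggestions once the sentence ends. Everything below is hypothetical illustration; the trivial rule standing in for a real semantic model, and the example mappings, are my own assumptions.

```python
# Hypothetical sketch of hybrid prediction: word-level mid-sentence,
# semantic-level once the sentence is finished (ends with punctuation).
WORD_TO_EMOJIS = {"worries": ["😟"], "pizza": ["🍕"]}

def semantic_predict(message: str) -> list[str]:
    # Placeholder for a real semantic model; a trivial rule for the demo.
    positive = {"no worries.", "thanks!"}
    return ["😊"] if message.lower() in positive else []

def hybrid_predict(message: str) -> list[str]:
    """Dispatch between the two mechanisms based on sentence completion."""
    if message and message[-1] in ".!?":
        return semantic_predict(message)  # sentence finished: whole-message meaning
    words = message.lower().split()
    return WORD_TO_EMOJIS.get(words[-1], []) if words else []

print(hybrid_predict("no worries"))   # ['😟']  word-level, mid-sentence
print(hybrid_predict("No worries."))  # ['😊']  semantic, sentence done
```

Note how this addresses P20's “No worries” complaint from the earlier discussion: mid-sentence the keyboard reacts literally to “worries,” but once the sentence is complete the semantic pass can recover the positive intent.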

Sketches

To get a better idea of what we wanted to make, we sketched out an early version of our messenger and keyboard:

 
 

Wireframes

After sketching an early iteration of our screens, we put together low-fidelity and developed versions of them:

 

FinalProduct.png

Final Product

After a lot of careful consideration, we created the final iteration:

Emoji Bar

Six relevant emojis hover above the user’s message bar to encourage use when replying, without interfering with the elements of our minimalistic messenger. Using semantic-level prediction on the last message sent, the Emoji Bar presents emojis that add relevant emotion to the user’s conversations.

Emoji Drag & Drop

Eboard allows users to attach relevant emojis by dragging and dropping predicted emojis directly from the Emoji Bar, resulting in a more interactive and engaging messaging experience.

Semantic-Level Prediction

When a user begins to construct a message, the Emoji Bar begins to semantically predict emojis for selection. Word-level prediction functionality still exists on the Suggestion Bar, thus not restricting users to a single type of prediction. With the two prediction mechanisms co-existing in Eboard, users are able to choose emojis optimally.

Prediction Tab

By adding a semantic-level prediction tab to the emoji section of the keyboard, Eboard provides users with even more options to choose from when the proper emoji is not presented on the Emoji Bar.

 

 
Beyond.png

Reflection

This project was my first attempt to conduct research of this depth and caliber, as well as to document the research results in an academic paper. Focusing on this project for three months, I learned a lot about design practices such as designing a research project, analyzing data, and converting data into new knowledge. Being able to create knowledge and learn something through my own effort and hard work has been incredibly rewarding and gives me confidence in pursuing this discipline. Despite all this, the project had limitations that prevented it from reaching its full potential.

First, we built our keyboard using two existing open-source libraries, leading to some inconvenience for participants during the studies. For example, some participants who owned an Apple iPhone in the first study complained about the keyboard layout; adapting to an unfamiliar layout likely influenced their chat experience during both studies. Second, because the existing model only suggested five emojis from 64 possibilities each time, participants did not have as much variety as they might have had. Another limitation was the inconsistency of emoji appearance across platforms: as prior work has pointed out, the fact that different platforms render the same emoji differently can lead to varied interpretations. Our keyboard rendered emojis as Android does, so they appeared somewhat different to iPhone users, and participants may have decreased their emoji usage to prevent misunderstandings.

Third, we should have designed the Longitudinal Study to run over 21 days rather than only 15, because a participant might be more inclined to message people on their days off, such as Friday, Saturday, and Sunday, than on a Tuesday. Lastly, we had limited time to design the keyboard itself because we allocated the majority of our effort to conducting the research and writing the academic paper, aiming to submit it to conferences. Our final designs have no animation and are still in the early phases of development. Additionally, we did not perform proper testing on our screens and relied on informal feedback when creating them. I believe a keyboard that implements what we learned from our research has enormous potential, and there are many features we ideated that we would love to build.

Next Step

There are many undiscovered ways to improve the communication of emotion and human behavior in computer-mediated conversations, and this part of our field is just beginning to evolve. Emojis are already the most popular method to achieve this; however, there is still room for improvement. Our design from this project focused on enhancing CMC, but multiple ideas stemmed from our research that would take it even further. The first next step is to add animation to our designs to give them more life and to help us better understand the interactions behind them. Next, we would love to add animations to emojis as well, to bring in another level of emotion. While doing secondary research for the design, I found the animated emoji set shown below, created by Seth Eckert, that is similar to what I would want on all messaging devices. Third, I believe semantic-level prediction can go beyond emojis and be applied to GIF and image formats, which would add more depth to conversations. Additionally, I want to work on richer individual message interactions rather than just attaching emojis. Lastly, I feel we barely scratched the surface with semantic-level prediction in general, and there could be a better implementation of the middle ground between it and word-level prediction. If you look at the features we ideated during the brainstorming phase, there is much more that could be implemented to push this concept forward.

 

By Seth Eckert, see more here: https://dribbble.com/shots/1925708-Emojis

Conclusion

In this case study, we compared two emoji prediction mechanisms: word-level and semantic-level. Specifically, we explored how prediction mechanisms affect the online chat experience and how people perceive the two mechanisms. The laboratory study showed that the existence of emoji prediction did not have a significant influence on expressiveness and engagement. Consistent with other research in this area (1, 2, 3), we conclude that emojis themselves, rather than the prediction mechanism, affect the chat experience most. From our longitudinal field deployment, we found that semantic-level prediction led to an increase in emoji usage and was preferred because of its relevance to emotions.


Since semantic-level prediction has been gaining popularity in commercial text entry applications, we proposed design guidelines for emoji prediction based on the feedback from our experiments. We believe that by incorporating semantic-level information in emoji prediction, researchers can explore new ways of providing better experiences in text-based computer-mediated communication. Language is becoming visual. Emoji, stickers, and GIFs are exploding in popularity, despite the fact that it is still labor-intensive to use them in an advanced way. Enthusiasts create personal collections of images for every situation and have memorized every page of the emoji keyboard, but the rest of us rely on the emoji immediately accessible on our “most used” menu and occasionally forward a GIF here and there.


This visual language has matured alongside technology, and this symbiotic relationship will continue, with new technology informing new language, which in turn informs the technology again. Communication in the future will have artificial intelligence tools adapted to you, helping you seamlessly weave imagery with text. While tools like Dango sit at the cutting edge of this progression, there are still improvements to be made.

Thank you for reading.