Digital Strategy Digital Technology Highlights

4 Examples of How AR & VR Will Improve Customer Service


What comes to your mind when you hear VR & AR, apart from the dizzying delight of games like Pokemon GO and exciting sci-fi movies? Have you ever thought about customer service? A consistent focal point for businesses is how we interact with our customers. Do they enjoy our customer service? The following passage explores the use of AR and VR through 4 examples, from call centers and contractors to restaurants and in-person shopping.

It takes 11 minutes to read.

Great customer service is still alive and well in many businesses, but others are seeing more red-flag comments and lower scores. With the internet at our fingertips at every moment, there's no hiding a bad experience; any and all of them are quickly posted for the world to see.

In a world where fears abound surrounding whether virtual reality will have us hunkered up in dark rooms attached to machines straight out of Bruce Willis’s Surrogates — unlikely to see another physical being for weeks at a time — could that same technology actually help or save declining customer service?

Why yes, I think it can.

Where We Currently See Breakdowns in Customer Service

We can all see where voids can happen, causing actual or perceived bad customer service. In fact, you may have already thought of one that happened recently. The lists are endless, but there’s a handful of common complaints plastered on review sites for both online and in-person consumer goods.

Customer service is always a human-to-human interaction in which we want and demand a personal touch, empathy, and help. Where we currently lack the technical abilities to provide such experiences, often across distance, further technological advances in AR and VR can and should be used to provide solutions allowing for better consumer experiences.

Flash forward to a super random date in time, say 2042. (I, personally, hope some of this technology doesn’t take that long, but 42 is always a good answer. For those of you who don’t know why, might I suggest some “light” reading surrounding the topics of interstellar highways, dolphins, and sad robots?)

Anyway, we’re in 2042. We’re still walking around as humans, interacting with each other in a physical world, and we’re consuming products in much the same way we do today.

1. The Dreaded Call Center

Hours of mind-melting hold music, fourteen different numbers to push in the hope of finding the appropriate department, and a bad connection that makes clear communication difficult all plague this consumer torment.

Imagine for a moment, though, that you are instead enlisting an AR device during what is normally a frustrating conversation.

Do you remember when you ordered that amazingly chic, Pinterest-esque cascading lawn fountain? You couldn’t have been more excited for it to arrive, but when it did, you discovered the company accidentally shipped a twelve-foot tall, fully-lit pink flamingo (who even orders something like that?).

On your call to customer service, a normal conversation can range anywhere from "we show we shipped you the fountain, so I can't do anything since that's what I show you have," to "since it's an item over 'X' value, we would need you to ship it back to us at your expense," to "we've never had anyone order something like that."

Instead, wouldn't it be easier for both parties to use an AR-enabled device, allowing the customer service representative to appear in your space? Now that you can both see each other, the interaction is much more human. You see him, making him a person you're less likely to scream at, and he sees you, making you more than just another voice in a long line of complaints.

From there, you walk out to your lawn where your neighbors have gathered to sign a petition in order for the flamingo to be removed, and the representative can see the error.

Now, not only do you have a more human interaction, encouraging compassion on both sides, you’re able to base the remainder of the conversation on information you both share. Whereas the agent previously only had paperwork in which to gauge a proper response, he now has proof and is more likely to come to a swift solution.

2. The Contractor

You’ve done your research and asked for Facebook recommendations, but you still feel apprehensive about hiring someone you’ve never met to do work on your house.

Handymen and contractors are often cast as swindlers, taking longer than they should and creating extra work to earn extra pay. While many deserve the good reputations they carve for themselves, trust is earned at a personal level.

In this instance, you've requested a kitchen remodel, which will require the crew to work in your home while you're away at the office. On just the first day, you've gotten three calls. One stated they didn't bring the right equipment, so they'd be back at a different time. Another indicated the work was more intensive than they'd expected (they need to remove a weight-bearing beam), so charges will increase. The third indicated they'll be changing the layout from what was discussed due to the beam removal.

The team lead tries to describe how it will look, but you’re having a hard time envisioning it. They’d shown drawings to start, but now that those are void, you feel the need to see a new visual in order to agree to the work.

When you finally arrive home, you see your kitchen is in utter disarray, but you can’t make sense out of what kind of progress they did or didn’t make during the day.

When we bring future AR/VR capabilities into the mix, we are able to re-add confidence and trust.

Now, if you wonder what the team of contractors does while you're away, you simply trigger your AR or VR device. By way of AR, you can view a miniature, 3-D, real-time stream of your kitchen. VR would allow for the same, but in a different way (i.e., via a headset or by viewing a 360 video remotely).

You may be asking why we don't just use cameras for this now, but having that capability doesn't make it as useful. One nanny cam in your home doesn't show the whole picture, and such cameras are easily moved.

At this point, when you get the call from the lead about the beam removal and the subsequent design changes, you're also provided with visuals. 3-D models show why the beam must be removed and indicate the extra work involved to shore up the rest of the home.

By way of VR, you can be transported into the newly-designed kitchen, and you are able to make educated decisions as to how you want to proceed.

Naturally, there is a certain amount of time necessary to create these modeled visuals, but when you cannot personally supervise work, the extra time spent putting everyone on the same page is worth the effort for both parties.

3. The Crowded Restaurant

We’ve all been there — the crowded restaurant where the person who fills your water is different from the one who takes your order, and another, still, brings your food. You flag down a server at the next table to indicate you’d like another beverage or to state your food is cold and incorrect, but you’re informed it’s not his/her table.

Often, especially in popular, crowded restaurants, efficiency is a key driver in how operations are run. However, when you are unable to make a personal connection with the person providing service, or indeed are unable to determine which of the many faces is at your disposal, a perceived lack of acceptable customer service can form.

While we cannot fix every challenge within the restaurant industry, augmented reality can be a useful tool in creating an environment in which we feel as though our concerns and care are of top priority.

Imagine for a moment that you're in Bernadette's Lettuce Bistro (it's all the rage in 2042, and yes, they only serve lettuce). While you may still have the different types of servers attending to your table (water-filler, order-taker, food-bringer), you have another unseen attendant in the back.

In a dedicated corner, out of sight from the public, there is an AR display showing the entire restaurant. It shows the tables, patrons, staff movements, etc.

In between the time the "food-bringer" delivers the food to your table and when the "order-taker" comes back to check on things, this unsung hero in the back noticed a few things regarding your table:

·Your beverage glass is nearly empty.

·You took a bite of lettuce, made a face, and then looked around for something.

·You’ve put your utensils down and are no longer eating.

In a normal restaurant scenario, you may wait five or more minutes before getting ahold of a server willing to claim your table as his/hers. Five minutes may not seem like a lot of time in the grand scheme of things, but in customer service, that is five minutes for someone to seethe at an increasing rate. Five minutes is enough to make a final decision on the quality of an establishment. It’s plenty of time to build up enough anger to then project onto the awaiting staff, and it’s more than enough time to pull out a mobile device for an online review or scathing social media post.

Instead, Bernadette’s dedicated AR staffer sees these actions as key indicators that your table needs immediate attention. Now, as a server approaches your table, prior to even speaking with you, your concerns are addressed. You’re asked if you’d like another beverage and what can be done to make the lettuce more to your liking.
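The AR staffer's job described above amounts to watching for a few key indicators and flagging the table. A minimal sketch of that logic, assuming hypothetical observation fields and thresholds (none of this is a real system):

```python
# Hypothetical sketch: the "AR staffer" as a simple rule engine over
# table observations. Field names and thresholds are illustrative assumptions.

def needs_attention(table):
    """Return a list of reasons a table should be flagged for a server."""
    reasons = []
    if table.get("beverage_level", 1.0) < 0.2:        # glass nearly empty
        reasons.append("refill beverage")
    if table.get("grimaced_after_bite"):              # negative reaction to food
        reasons.append("check on the food")
    if table.get("utensils_down_seconds", 0) > 120:   # stopped eating
        reasons.append("ask if anything is wrong")
    return reasons

table_12 = {"beverage_level": 0.1, "grimaced_after_bite": True,
            "utensils_down_seconds": 180}
print(needs_attention(table_12))
```

The point is that each indicator maps to a concrete, actionable prompt for the approaching server, so concerns are addressed before the guest has to raise them.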

4. In-Person Shopping

You walk into your local computer store, let’s call it Larry’s Computational Emporium. There, you look at computers for roughly thirty minutes, trying to figure out why one is $1,000 more than another. You’re not a computer engineer, after all. You’d have bought something online if you could, but you’d hoped to speak to a clerk for assistance in making the right decision for your needs.

After thirty minutes, you hike around the Emporium’s aisles, desperately trying to find help. You finally locate eight employees having a Blow-Pop meeting near the tech-enhanced refrigerators (they monitor your weight and eating habits — chastising comments are optional). When you ask for help, you get a round of blank stares followed by a couple eye rolls and the very distinct impression that you are thoroughly bothering these people by asking them to “do their job.”

After fifteen minutes of hearing technical terms and acronyms referring to things outside the realm of your expertise, feeling as though you’re being talked down to by a person ten to fifteen years your junior, you make a slightly-educated decision on a computer.

Instead, let's head back to Larry's Computational Emporium with AR/VR. We walk in, and we patrol the computer displays for a few minutes. A moment or two later, we're approached by a young human asking if assistance is needed. There was still, in fact, a Blow-Pop meeting near the refrigerators, but a device buzzed on one human's wrist indicating someone was in the computer aisles. (No, that's not AR or VR, but all sorts of other tech is advancing, as well.)

This human still speaks in technical terms and acronyms, but thankfully you’ve downloaded the Larry’s Computational Emporium AR App, and that translates the jargon in a way that makes sense to you. You’ve already selected your expertise level, so when the associate says RAM, your AR device plops an overlay on the scene with, “it’ll make the computer do things faster.”
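The overlay described above is essentially a glossary lookup keyed by the expertise level you selected in the app. A toy sketch of that idea, where all terms, levels, and phrasings are made-up assumptions:

```python
# Hypothetical sketch of the jargon-translation overlay: glossaries keyed by
# the shopper's self-selected expertise level. Entries are illustrative.

GLOSSARY = {
    "beginner": {
        "RAM": "it'll make the computer do things faster",
        "GPU": "it makes games and videos look smooth",
        "SSD": "it starts up and opens files quickly",
    },
    "intermediate": {
        "RAM": "short-term working memory; more helps with multitasking",
        "GPU": "a dedicated chip for graphics and heavy parallel computation",
        "SSD": "flash storage, much faster than a spinning hard drive",
    },
}

def translate(term, level="beginner"):
    """Return an overlay caption for a jargon term, or the term unchanged."""
    return GLOSSARY.get(level, {}).get(term, term)

print(translate("RAM"))                      # beginner-level overlay
print(translate("GPU", level="intermediate"))
```

Because the lookup is per-shopper, the same associate's sentence renders differently for a first-time buyer than for an enthusiast, which is exactly the mismatch the next paragraph describes.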

It's easy for someone to say we should just train associates to speak in ways more accepted by the public. The inherent problem, however, is that every member of the public has a different level of understanding of any subject. One individual may walk in knowing about RAM but needing to learn about graphics cards, while the next is buying his/her very first computer. Expecting an hourly associate to immediately and accurately assess a shopper's existing subject knowledge is impractical and unproductive.

We can use our understanding of these shortcomings to improve customer service in other ways. When we empower the consumer with ways to customize the experience, we are creating situations where the experience is fulfilling, helpful, and efficient for all parties involved.

We See Customer Service in Our Everyday Lives

This will not change as time saunters onward. As advancements in the augmented and virtual reality spaces further enable better experiences across a multitude of platforms, we will better serve our future selves if we remember this technology can also enhance human-to-human interactions.

By Scottie Gardonio, creative manager.


Digital Technology Highlights

Why Elon Musk is Right About AI Regulation


Though AI technology is still in its infancy, its "double exponential" pace of development could be a hazard that puts humanity at risk; most expert observers, after all, failed to predict AlphaGo's victories over top human players. Elon Musk calls for "proactive regulation" to be put in place now or in the immediate future, because history shows that governments and regulation move slowly while technology gains momentum. The following passage offers insight into this apprehension.

It takes you 7 minutes to read.

I am not surprised that almost everyone who works in AI (specifically Deep Learning) has refuted Elon Musk's suggestion that government needs to begin regulating AI. The key conversation at the moment is Mark Zuckerberg's remark that Musk's assertion was 'pretty irresponsible' and Elon Musk's response that Zuckerberg's understanding was 'limited'.

Many professional researchers don't have a limited understanding of Deep Learning, and very few of them have chimed in to support Elon Musk.

Elon Musk is not only a radical thinker, he is also a very disciplined one. There have been plenty of naysayers regarding his ventures like SpaceX and Tesla. However, he has remarkably proven the skeptics wrong and executed in a manner that almost nobody else in this world can replicate. His ventures build the most complex of machinery in a way that is not only technologically but also economically feasible. His accomplishments alone should earn Musk the benefit of the doubt on this one.

I am writing this blog entry so that I can explore in depth Elon Musk’s reasoning. Musk holds an opinion that is clearly in the extreme minority.

What did Musk actually assert? Here is what he clarified in a fireside chat after his remarks during the governors' meeting.


Musk clarified that he envisions a government agency forming first and seeking to gain insight into AI and its use initially, without any kind of attempt to regulate by “shooting from the hip.”

The primary objection of anyone knowledgeable about this field is that there is nothing specific yet that requires regulation (one idea is that an automated system must never falsely pose as a human). The field is still in its infancy (despite machines mastering Go and arcade games from scratch), and the closest thing we have to ethical rules is the "Asilomar AI Principles." However, these principles are abstract and not concrete enough to define laws and regulations around.

Musk's fear, however, is reflected in his statement: "It's going to be a real big deal, and it's going to come on like a tidal wave." Musk speaks about a 'double exponential': the acceleration of hardware and the acceleration of AI talent (note: NIPS 2017 had over 3,500 papers submitted). This 'double exponential' means that our predictions of its growth may be too conservative. Musk further remarks that researchers can get so engrossed in their work that they overlook the ramifications. Musk's fundamental stance is that more effort should be placed on AI safety than on pursuing AI advances. He argues that if it takes a bit longer to develop AI, then that would be the right path.
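The 'double exponential' claim can be made concrete with a toy calculation: if hardware capability and the talent pool each grow exponentially on their own schedules, their product compounds faster than either alone. The doubling times below are made-up assumptions purely for illustration:

```python
# Toy illustration of a "double exponential": combined capability as the
# product of two independent exponentials. Doubling times are assumptions.

def capability(years, hw_doubling=2.0, talent_doubling=3.0):
    """Combined capability after `years`, given two independent doubling times."""
    return 2 ** (years / hw_doubling) * 2 ** (years / talent_doubling)

for y in (0, 6, 12):
    print(y, capability(y))  # grows 32x in 6 years, 1024x in 12
```

A forecaster extrapolating either trend alone (8x or 4x over six years) would badly underestimate the combined 32x, which is the sense in which predictions "may be too conservative."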

What we know about governments and regulation is that they move at a very slow pace. Musk is proactively kickstarting the conversation about government regulation, calculating that by the time government is eventually ready, AI technology will have advanced enough to allow for meaningful regulation. It is, indeed, placing the cart before the horse.

Most experts will agree that it is premature to bring up AI regulation. However, government, society, and culture move at rates much slower than technological progress. Musk's gamble is that the existential threat outweighs the negative effects of premature regulation. He calculates that it is better to be early but wrong than to be late and correct.

The previous American administration published a report on AI last year. However, the anti-science leanings of the current administration may put a damper on any future government-subsidized studies of AI's effects on society. US Treasury Secretary Steven Mnuchin even opined that the threat of job loss due to AI is "not even on our radar screen," only to walk back his statement a few months later. In short, despite Musk's statements, it is very unlikely that the current administration will make an effort in this area; it would prefer to let 'market forces' decide.

Musk's sounding of the alarm will likely fall on deaf ears for the next four years. Perhaps that is why he brought it up at the governors' meeting; like climate action, this threat may be taken up by US states instead. Unfortunately, Mark Zuckerberg's remarks and many other researchers' objections only give other governments additional ammunition to do nothing.

Unfortunately, the examples Musk gave at the governors' meeting to motivate regulation were threats due to cybersecurity and disinformation, which are not threats that only AI can pose. (On reflection, Musk may have deliberately avoided use cases that would give malicious actors ideas!) Musk's best analogy is the idea that it is easier to create nuclear energy than to contain it.

We are indeed heading into dangerous times in the next four years. It is difficult to imagine what Deep Learning systems will be capable of by then. Artificial General Intelligence (AGI) will likely not have been achieved, but something very sophisticated in the realm of narrow AI may well be developed: specifically, weaponized AI in the domains of disinformation and cyber-warfare. The short-term threats are job destruction and cyber-warfare. These are clear and present dangers that do not require the development of AGI.

Toby Walsh of the University of New South Wales, however, has a different take:

“We are witnessing an AI race between the big tech giants, investing billions of dollars in this winner takes all contest. Many other industries have seen government step in to prevent monopolies behaving poorly. I’ve said this in a talk recently, but I’ll repeat it again: If some of the giants like Google and Facebook aren’t broken up in twenty years time, I’ll be immensely worried for the future of our society.”

Rachel Thomas of Fast.AI writes about similar concerns:

“It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future. (snip) … but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investments into addressing mass unemployment and wealth inequality (both of which are well-documented to cause political instability)”

Both opinions revolve around inequality. AI ownership has been confined to a few elite companies. Musk was concerned enough about this that he formed OpenAI. However, it brings up a concrete regulatory issue: should AI be owned by a few private companies, or should it be a public good? If it is indeed a public good, then how shall that be protected?

We coincidentally are exploring a few of these ideas in our Intuition Fabric project.

Update: I believe Musk is aware of A.I. technology that already exists today that can be extremely disruptive and requires serious discussion in regulating. It definitely is an application of Deep Learning, but he has deliberately not been specific to what it is. Suffice it to say that it is in the realm of network intrusion and disinformation.

By Carlos E. Perez


Digital Technology Internet of Things

IoT Evolution West 2017—7 Takeaways from Las Vegas


Despite its seemingly slow development compared with eye-catching technology like Artificial Intelligence, the Internet of Things is a concept that will revolutionize people's lifestyles through connected-living solutions. At IoT Evolution West 2017, the IoT ecosystem and worldwide business leaders gathered to learn how to leverage the power of the IoT to transform and move businesses forward. The following passage boils this year's IoT Expo down to 7 takeaways.

It takes you 8 minutes to read.

Greetings from rainy Las Vegas! Yes, you read that right. It does pour every now and then in the desert. Despite the less-than-ideal weather conditions, the IoT For All team braved the unexpected moisture and has boiled down 3.5 days worth of presentations into 7 things you need to know.

Get to know IoT evolution.

7 Takeaways from IoT Evolution West 2017

1) LPWAN is Alive and Well

As a modern-day Mark Twain might have quipped, the reports of LPWAN's death are greatly exaggerated. Despite the predictions of new cellular technologies such as LTE-M and NB-IoT displacing the need for Low Power Wide Area Networks (LPWANs), they are alive and well.

Marc Josephson (CEO of Coris) moderated a panel with John Horn (Former CEO of Ingenu), Dave Kjendal (CTO of Senet), and Dane Witbeck (President of Meshify) to discuss why LPWANs are still the right choice for battery-powered, cost-sensitive IoT applications.

Assuming you don't need to stream video or audio, enterprises should seriously consider LoRa and RPMA technology for certain use cases. New advancements in over-the-air (OTA) firmware updates, position fixing without GPS, and declining costs for radio modules make them an attractive choice and, unlike some of the emerging cellular technologies, they're available now.

2) MulteFire is Getting Hotter

MulteFire is an LTE-based small cell technology that operates in unlicensed shared spectrum at 5GHz (same as WiFi) and delivers LTE performance with WiFi simplicity. It provides enhanced coverage and capacity, mature security, and a robust user experience appropriate for ISPs, mobile operators, cable companies, enterprises, and small businesses.

Mazen Chmaytelli, President of the MulteFire Alliance and Senior Director of Business Development at Qualcomm, discussed how removing the financial barriers of licensed spectrum makes it an excellent choice for many indoor and outdoor neutral host deployments within malls, airports, hospitals, venues, campuses, factories, and warehouses.

The MulteFire Alliance was initiated by Qualcomm and Nokia in late 2015 and has been embraced by Intel, Ericsson, Huawei, Boingo, Comcast, Cisco, CableLabs, Sony, SoftBank, and other industry giants. MulteFire uses Listen-Before-Talk mechanisms, power level management, and channel sensing for WiFi coexistence, and it supports streaming applications like video and voice. If this new communications technology is not on your product roadmap, you might want to consider it. For more details, visit the MulteFire Alliance website.

3) IoT Technologies Will Disrupt the Insurance Industry

The ubiquity of IoT presents numerous opportunities for the disruption of business-as-usual within the insurance industry.

Marc Josephson (Coris) moderated a lively panel with Craig Copeland (Swiss Re), Ashok Nare (Kollabio), and Anand Rao (PWC) to discuss the implications of real-time remote monitoring on risk models and premium calculations.

Timing is still uncertain but the applications for security (homeowners/renters insurance), driver behavior (auto insurance), worker’s compensation (business insurance), and fitness (health insurance) are coming. No longer will premiums be calculated on an annual or semi-annual basis; expect near real-time adjustments to costs as relevant behavioral data is available via IoT technologies.
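The shift from annual rating to near real-time adjustment described above can be sketched as a base premium scaled by behavioral factors reported by devices. The rates, weights, and sensor fields below are purely illustrative assumptions, not any insurer's actual model:

```python
# Hypothetical sketch of usage-based premium adjustment from IoT telemetry.
# Base rates, factor weights, and field names are illustrative assumptions.

def monthly_premium(base, telemetry):
    """Scale a base premium by behavioral risk factors reported by devices."""
    factor = 1.0
    if telemetry.get("hard_braking_per_100km", 0) > 5:
        factor *= 1.15      # risky-driving surcharge (auto)
    if telemetry.get("smoke_detectors_online"):
        factor *= 0.95      # monitored-home discount (homeowners)
    if telemetry.get("avg_daily_steps", 0) > 8000:
        factor *= 0.97      # active-lifestyle discount (health)
    return round(base * factor, 2)

print(monthly_premium(100.0, {"hard_braking_per_100km": 8,
                              "smoke_detectors_online": True}))
```

Recomputing this monthly, or even daily, as fresh telemetry arrives is what replaces the traditional annual or semi-annual rating cycle.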

4) Lessons on IoT From Amazon

John Rossman, former Amazon executive and author of the book “The Amazon Way,” hosted a keynote on how to rethink products in a connected world.

He explained that IoT is much more than simply applying technology to reduce costs and increase business intelligence but rather an opportunity to evolve static products into sticky services that create a long-term, mutually beneficial relationship with end users. Companies that understand this dynamic and build seamless end-to-end solutions, whether for consumers or businesses, will be the big winners when the smoke clears.

5) Processing and Storage Continue Their Relentless March to the Edge

Cloud computing is all the rage these days for good reason but many real-time IoT applications require processing and storage closer to the edge. The advantages of edge computing include decreased communications costs, better security, fewer single points of systemic failure, and faster decision times.

Chris Cellberti of the InField Group led a discussion with Robert Lutz (VP of Business Development at Systech Corporation) and Hiep Phan (VP of Research and Development at Virtium) on how gateway/access point compute and storage solutions can enable applications that aren’t possible with a cloud-based approach.

In many IoT use cases, gateways act as local aggregation points in star topologies and are a natural place to augment processing and storage capabilities, especially for IIoT use cases.

6) Remote Asset Management is a Killer IoT Application

Asset tracking and remote management is one of the hottest IoT areas right now. Numerous sensor, communications, and software companies are working on integrated solutions that permit always-on, two-way communication and precise tracking of nearly any valuable asset—from cars to boats to containers to heavy equipment.

Angel Mercedes from Sierra Wireless hosted a chat with Julie McGowan (GLOBECOMM), Chuck Moseley (Inmarsat), Sudhakar Marthi (ZOHO Corporation), and Dan Harper (Siren Marine) on how the benefits of IoT are changing the way businesses and consumers monitor their fixed and mobile assets.

Satellite communications, inexpensive and precise location, and solar-powered battery recharging are some of the technological advances that are driving this trend across use cases such as theft reduction, inventory management, supply chain optimization, and proactive preventative maintenance.

7) Large-Scale IoT Deployments are Scarce

And finally, this was not mentioned during a particular track or presentation but rather was an observation from speaking with attendees, exhibitors, and presenters. Large-scale enterprise implementations of IoT are extremely rare, regardless of the underlying technology, and most revenue for IoT providers is in systems integration.

Impediments to scale include technological immaturity, high recurring communication costs, expensive sensors with custom firmware, and lack of a proven value proposition. The good news for industry players is that we are in the early innings of a larger automation cycle with the majority of SaaS/PaaS revenue still to come. Just batten down the hatches and make sure you have enough financial runway to weather the bumpy ride.

By Eric Conn


Digital Technology

10 Key Things You Need to Build Truly Smart Cities


After Mobile World Congress and IoT World earlier this year, there was a lot of buzz about 5G, smart mobility, general IoT, and smart cities. It feels like we're entering the future, and the excitement is palpable. Unfortunately, there are many soldiers on the battlefield without a plan. The following passage lists 10 key elements required for truly smart cities, for the designers building them.

It takes you 7 minutes to read.

Smart cities need an orchestration framework. The smart cities of tomorrow require more than simply deploying connectivity, sensors, and devices. Incrementalism will not serve cities well. Foresight and planning are necessary to build cities that are truly smart.

Here are 10 key elements that are required for truly smart cities and for understanding any smart city initiative in context.

#1: Ubiquitous Connectivity

It’s tough for a city to be smart without redundant, high-speed, low-latency wireless communications. That’s why 5G has so much attention and is so exciting.

For 5G to be maximally effective, the deployment strategy needs to bring 5G closer to the “action” than where a lot of 4G currently resides. To support real-time decisioning for autonomous vehicles, for example, 5G needs to live on the streets. It needs to be directly paired with curbside cameras, sensors, and processing that can, without a nanosecond of delay, support high-speed vehicles in motion.

Smart city architectures must also include low-power wide-area (LPWA) access that supports power-limited devices. For things like battery-powered devices floating in wells that report water level once a day, an energy-efficient communication protocol is paramount.
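Why does duty-cycled LPWA matter so much for a device like that well sensor? A back-of-the-envelope battery budget makes it clear. All current draws, durations, and the battery capacity below are illustrative assumptions:

```python
# Back-of-the-envelope battery budget for a once-a-day LPWAN water-level
# sensor. All current draws and durations are illustrative assumptions.

def battery_life_years(capacity_mah=2400,       # e.g. a pair of AA-class cells
                       sleep_ua=5,              # deep-sleep current, microamps
                       tx_ma=40, tx_seconds=2,  # one LoRa-class uplink
                       reports_per_day=1):
    sleep_mah_per_day = (sleep_ua / 1000) * 24                 # uA -> mA, 24 h
    tx_mah_per_day = tx_ma * (tx_seconds / 3600) * reports_per_day
    return capacity_mah / ((sleep_mah_per_day + tx_mah_per_day) * 365)

print(round(battery_life_years(), 1))   # decades on paper; self-discharge
                                        # and electronics shorten it in practice
```

Under these assumptions the sleep current, not the daily transmission, dominates the budget, which is why LPWA protocols optimize so aggressively for deep sleep between short bursts.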

#2: Resilient and Advanced Energy

Smart city solutions demand advanced energy networks that are sustainable, secure, dynamic, and resilient. You can’t grant a city the moniker of “smart” without resilient, advanced energy.

Consider this. There isn't an IT engineer on the planet who would build a data center without a UPS (uninterruptible power supply) and backup power. Why would we plan for anything less with high-value city infrastructure? If communications and intelligence systems are only available when the grid is up, we fail citizens during catastrophes and extraordinary events, when they need services the most.

#3: Security and Privacy

We must integrate security into smart city platforms from the start, not as an afterthought. Insecure solutions are not acceptable. Access protocols and communications require an advanced security architecture that keeps out malicious agents. Overrides and mandatory upgrade paths must also be embedded into the architecture to prevent and mitigate the impacts of cyberattacks.

Security is about more than protecting systems and places; it is also about protecting the privacy of citizens that pass near city equipment. Smart solutions must respect citizens.

#4: Sensors and Measurement

Data capture has long been a major focus of smart city work. Smart cities are instrumenting literally everything possible, continuously adding new data capture capabilities. Weather, wind direction and intensity, road surface temperatures and conditions, air quality, radiation, pollutants, foot traffic, vehicle traffic, wildlife, soil moisture, noise pollution, light levels, pollen, water quality, water levels, vibration, tilt, sewage flow rates, valve pressure, and cameras are some types of data that smart city devices collect.

But data alone is not enough to make a city smart. Smart requires a corresponding, tiered architecture for processing that data and acting on the derived conclusions.


#5: Curbside Compute

In the cities of tomorrow, the volumes of data and the need for decision speed won't allow for sending everything back to centralized processing in the cloud. Things will move too fast for proverbial soldiers to wait for central command to tell them when to shoot. Just think about the coming dynamism needed for autonomous vehicles and drones, for example.

Municipal systems cannot be deployed as a set of dumb nodes tied to an intelligent core. We must build high-value nodes with local processing resources that operate seamlessly as a tiered participant in a distributed network. Our municipal infrastructure must also coalesce into an incredible, distributed processing fabric: an extended, “living” system, connected and rich with data.

To support network-efficient, low latency, real-time decisions required for dynamic traffic management, augmented reality and beyond, smart cities need to deploy computational power at key locations and nodes: curbside data centers.

#6: Sidewalk Storage and Caching

Storage goes hand in hand with compute. To be smart, cities also need to deploy storage at the very edges of our communications infrastructure as part of a tiered storage and caching network. The immense volumes of data we’ll accumulate each minute will require pairing with local storage to avoid needless, crippling network congestion.

Consider this: each autonomous vehicle alone is projected to generate roughly 4 TB of data per day. Beyond vehicles, shipping high-definition video as-is from multiple cameras at thousands of nodes across a city to cloud data centers will be impractical and wasteful. Instead, local storage can be paired with local processing to dynamically extract insights and identify data for retention or for relay to cloud resources.
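As a rough illustration, the retain-or-relay pattern described here can be sketched in a few lines of Python. The node model, motion scores, and the 0.8 threshold are hypothetical, invented for this sketch rather than drawn from any deployed system:

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    """Hypothetical curbside node: derives insights locally, retains only what matters."""
    retained: list = field(default_factory=list)   # local storage for flagged segments
    uplinked: list = field(default_factory=list)   # small summaries relayed to the cloud

    def ingest(self, segment_id: str, motion_score: float, threshold: float = 0.8):
        # Process at the edge instead of shipping raw video upstream.
        if motion_score >= threshold:
            self.retained.append(segment_id)               # keep raw data at the edge
            self.uplinked.append({"segment": segment_id,   # relay only a summary
                                  "event": "motion",
                                  "score": motion_score})
        # Segments below the threshold are discarded, saving storage and bandwidth.

node = EdgeNode()
for seg, score in [("cam1-0001", 0.2), ("cam1-0002", 0.95), ("cam1-0003", 0.4)]:
    node.ingest(seg, score)

print(node.retained)  # only the high-motion segment is kept locally
print(node.uplinked)
```

The point of the sketch is the bandwidth arithmetic: two of the three segments never leave the node, and the third travels as a few bytes of summary rather than raw video.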

Caching is also important for delivery of next generation content. Augmented and virtual reality cannot withstand latency in delivery of assets. Media assets for such immersive content must be available instantaneously to make those capabilities reliably available throughout a city.

In short, smart cities must develop architectures that extend storage capacity to the sidewalk. They must evolve to feature highly distributed and dynamic storage arrays embedded throughout the city.

#7: Hardware Maintenance and Upgradeability

Even the brightest minds fail to get everything in a complex system 100 percent right the first time around. Thus, smart city infrastructure needs to be maintainable and upgradable. We simply can’t require ripping up concrete and working through lengthy planning processes for every improvement to our city infrastructure. Technology moves too fast.

Cities need to standardize the street-side smart city “server rack.”

Deployed infrastructure also needs to anticipate maintenance and manage upgrades and future extensions. That will certainly require technical foresight, but new agreements with municipalities and unions will help define operating norms and allowances.

#8: APIs and Third-party Development

With distributed cache and compute comes the natural question of development platforms for third parties. Smart city technology vendors, in partnership with municipal leadership, and with security in mind, must identify ways to thoughtfully expose access to resources, data and APIs to create new intelligence, apps and experiences.

No company has a monopoly on innovation. Even Steve Jobs couldn’t have foreseen the diversity of applications that emerged from the iPhone platform. Apple relied on third parties and a broad base of development talent to create apps like Instagram, Lyft, and Airbnb — apps that couldn’t be imagined on launch day for the first iPhone.

Of course, we can’t simply create a smart city “app store” that allows developers to self-publish. We must focus our attention on advanced approval processes to safely and expeditiously determine what gets published to the “production” environment of our streets.

#9: User Interfaces

With consumer devices, we’re on a trajectory for interfaces from wired (PC) to wireless (smartphone) to ambient (Amazon’s Alexa). Similarly, truly smart cities will turn public spaces into interfaces.

Smart cities will define strategies around “ambient” interactions via voice and augmented reality. It’s about a future of smart cities as seamlessly interactive spaces, and it’s worth noting that the idea that “space becomes the interface” has profound implications for architects and urban planners.

#10: Better Design

Historically, municipal and utility infrastructure has tended toward unattractive design — utilitarian, by definition. We can and must do better to make the public infrastructure that supports our daily lives beautiful, inspirational, and engaging.

Smartphones didn’t ignite their rocket-ship trajectory in adoption or capability until product design lit the spark of inspiration and imagination. We won’t realize the full potential of smart cities until design changes attitudes, adoption, and acceleration.

See also: Is location intel the key to citizen-centric smart cities?

By Brian Lakamp, Co-founder and CEO of Totem.


Chatbots Entry: Back To the Future Digital Technology Digital Transformation

5 Ways Chatbots Need To Evolve Before They Go Mainstream


Are you ready for a life framed by chatbots, from daily services like paying bills and buying groceries to activities like hotel booking and college admissions? Have you ever felt confused, with a touch of irritation, when using a newly launched chatbot that only looks intelligent and considerate at first glance? According to the following passage, before these smart, talkative bots really take over, there are limits and challenges to be dealt with in the coming years, mainly around language, emotion, and intent.

It takes you 6 minutes to read.

It’s hard to swing a $400 juicer in Silicon Valley these days without hitting a chatbot. Advances in artificial intelligence have enabled these talkative assistants to become a reality, and they’re now cropping up in many different forms. Facebook has greatly improved its chatbot game in Messenger, and everyone from Mastercard to Maroon 5 is now climbing on board. In a way, voice-controlled chatbots drive personal assistants like Siri on our phones and Amazon Echo in our living rooms. It’s enough to make you believe the bots are taking over.

Except they’re not, at least not yet. The technologies that drive chatbots — and those related to machine learning and AI, in particular — need to advance before “conversation” becomes a standard interface. Just this week, Google recognized this reality with its People + AI Research Initiative, which aims to advance the development of “people-centric” AI systems.

Computers need to understand humans better, in terms of language, emotion, and intent. Big brands are testing the waters with chatbots to ensure they don’t get left behind, but AI must evolve in several important areas before it has a chance of being widely used. Some of the needs are obvious, like improved speech recognition, while others are more subtle, like the ability for chatbots to signal what services they have to offer.

Here are five areas where these talkative bits of AI need to improve before they really take off.

1. Advances in AI and Natural Language Processing

Remember the early days of the web, when pages were a sea of flashing neon and blue links? That’s where chatbots are today. If bots are to reach ubiquity, people need to be able to ask questions and place orders using natural language. Whether that’s through voice or text, users can’t be expected to master a special vocabulary. If you ask Alexa to play a song and she doesn’t understand the first time, no big deal. The user is already committed to the “relationship” and is willing to overlook issues. But if a customer can’t order a movie ticket on the first try with a brand new chatbot, they’ll go elsewhere.

NLP does a reasonably good job today, but it struggles with local dialects, slang, and idioms. Speech recognition programs can learn speech patterns over time — but not if you only call a business once a year. We’re still at the early stages of human-machine interaction.

This all reflects on the image of a brand. Chatbots have to do better than simply replicating the atrocious experience of today’s automated call menus. With social media’s ability to massively amplify a bad customer interaction, businesses will want to get it right. Everything people can do today through the web and mobile apps should also be available through natural language, and we’re just not there yet.

2. Know Your Customer

A huge part of any AI implementation is understanding context. Much as marketing and sales are searching for that mythical 360-degree view of the customer, chatbots need to know more about the individuals they interact with — who they are, how they got here, what they’re looking for, and what they did in the past. How does that information get collected and shared among chatbots? Only after this is answered can bots reliably and consistently respond to people’s needs.

AdmitHub, for example, began working with Georgia State University last year to build a chatbot to handle its college admissions and financial aid workflow. In the early stages, the bot helped the university process questions directed at admissions, financial aid, and student activities offices — and it wound up increasing enrollment yield by a significant margin. Over time, the university expects that the bot will better understand the academic and financial profile of each student as they progress through their studies. And by the time those students return as alumni, the bot will know everything about them.

3. Machines Chatting with Machines

The web is an amazingly interconnected place. Type any product into Google and you’re instantly connected to merchants that have the exact product you’re looking for in stock. Chatbots need to evolve in a similar way so they can intelligently hand users off to other bots and seamlessly take over a communication.

If I type “I want a burger” into Facebook Messenger, it should be able to broadcast that to other chatbots in a manner they understand, so another service can fill my order. On the web, this is handled through well-defined REST APIs. But the chatbot space has a multitude of APIs competing for attention. It needs a mature conversational API that the industry can get behind so that chatbots can interoperate.
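To make the handoff idea concrete, here is a minimal Python sketch of what an intent-broadcast protocol between bots might look like. The hub, the bot classes, and the `intents`/`handle` convention are all assumptions invented for illustration; no real platform exposes this API today:

```python
# Hypothetical intent-handoff protocol: a host platform parses the user's message
# into an intent, then hands the conversation to whichever bot claims that intent.

class BurgerBot:
    intents = {"order_food"}
    def handle(self, intent, slots):
        return f"Order placed: {slots.get('item', 'burger')}"

class WeatherBot:
    intents = {"get_weather"}
    def handle(self, intent, slots):
        return f"Weather for {slots.get('city', 'here')}: sunny"

class MessengerHub:
    """Broadcasts a parsed intent and routes to the first bot that claims it."""
    def __init__(self, bots):
        self.bots = bots

    def route(self, intent, slots):
        for bot in self.bots:
            if intent in bot.intents:       # first matching bot takes over
                return bot.handle(intent, slots)
        return "Sorry, no service can handle that yet."

hub = MessengerHub([BurgerBot(), WeatherBot()])
print(hub.route("order_food", {"item": "cheeseburger"}))
```

The interoperability problem in the paragraph above is exactly that there is no agreed shape for `intent` and `slots` across vendors; a mature conversational API would standardize that envelope.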

4. Illuminating What’s On Offer

If I interact with an app or a web page, I can instantly see which services are available through links and other elements on the screen. Chatbots don’t have this visual language. When you talk to a chatbot, you’re going in with your eyes closed. What can I ask it? What does it do? Microsoft and Amazon have worked hard to educate consumers about the capabilities of products like Cortana and Echo — and whole articles have been written on the topic. When interacting with a chatbot for the first time, people need to know — will this bot let me choose seats or only buy a ticket? Can I change an appointment or only make one? Am I allowed to customize this restaurant order? With no visual cues, new expectations need to be established, or at least a way to signal what’s on offer.

5. Reading Emotion

Chatbots will provide infinitely better service when they can read facial features and inflections in tone to understand the emotion of the person they’re communicating with. This is partly about simple customer service — if the user is becoming frustrated or angry, it may be time to hand the conversation off to a human. But there can also be an entire class of services, in areas like counseling or therapy, that operate based on the reactions of the user. Advances in AI and computer vision will make this possible, but there’s much work to do.

Chatbots have a promising future, both at work and in our personal lives, but we need to address these challenges before they can enter the mainstream. When they do, we can expect new conveniences and new experiences — and new ways of engaging with customers. But moving too early risks alienating people before they have a chance to see the benefits.

By Andy Vitus, a partner at Scale Venture.


Chatbots Entry: Back To the Future Digital Technology Digital Transformation

How To Design A Conversation For Chatbot?


The flourishing of messenger bots, also known as chatbots, indicates the irreversible trend toward more integrated and seamless user experiences with conversational UI. It’s actually about conversations, something we participate in every day, often unconsciously. The following passage lists 4 steps for designing a conversation for a chatbot, starting with a clear understanding of customer needs. Appropriate conversations are thus designed to improve customer engagement with a human touch and to satisfy users’ demands.

It takes you 5 minutes to read.

It is natural for the designer to put himself in the user’s place before setting to work. In his mind, a conversation goes on between designer and user, which brings the designer closer to knowing what the user needs and wants. There are also real talks with users, giving designers a clear understanding of customer needs. Conversations of this sort can help the designer hit it off with the customer when the real conversation begins.

As computers gain the human touch to converse with users, we are on the threshold of transforming conversation into a user interface. Design is more of a dialogue now, with messages flowing back and forth with the customer. So many questions daunt the minds of designers as they set out to design conversations. How do you create a perfect chatbot experience and design a conversation?

1. Understanding the Goals of the Customer

Before designing a conversation for a chatbot, identify and understand the goals of the customer. To be more specific, understand why the client wants to build a chatbot and what the customer wants the chatbot to do. Finding answers to these questions will guide the designer to create conversations aimed at meeting end goals. For instance, take the case of a customer aspiring to build a hotel bot.

While charting the scripting course, the designer comes across conversations of the type given below.

● User lands on the hotel chatbot
● User wants the best hotel matching his criteria
● The chatbot replies with the name of the best hotel
● User wants to make a booking
● The chatbot asks the user for the stay dates
● User gives out the stay dates
● Chatbot makes the booking
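The steps above can be sketched as a simple rule-based flow. In this Python sketch the hotel name, keyword matching, and fallback reply are placeholders, not part of any real bot:

```python
# Minimal sketch of the hotel-booking flow above as a scripted state machine.

def hotel_bot(turns):
    """Walk the scripted conversation; `turns` is the user's messages in order."""
    replies = []
    state = "start"
    for msg in turns:
        if state == "start" and "best hotel" in msg:
            replies.append("The best match is the Grand Plaza.")
            state = "suggested"                       # hotel suggested
        elif state == "suggested" and "book" in msg:
            replies.append("What are your stay dates?")
            state = "awaiting_dates"                  # booking requested
        elif state == "awaiting_dates":
            replies.append(f"Booked the Grand Plaza for {msg}.")
            state = "done"                            # booking made
        else:
            replies.append("Sorry, I didn't understand that.")
    return replies

for line in hotel_bot(["Find me the best hotel", "I want to book it", "June 3-5"]):
    print(line)
```

Writing the happy path this way makes the designer’s job visible: every bullet in the script becomes a state, and every state needs an answer for the things the user might say instead.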

When the designer knows why the chatbot is being built, he is better placed to design its conversation.

2. Simulating Conversations for Inspiration

The frontrunners that have used simulated conversations with systems or computers have given us an idea of how these conversations are built. A company selling products trains employees to communicate with clients. Dialogue simulations help an employee communicate with the client in different situations and prepare him to handle real-life ones.

Take the case of a car salesman taking part in this simulated dialogue with the customer. This is one of the many scenarios created within a system prompting the salesman to communicate with the customer based on what the customer wants.

Simulated dialogues with customers help the salesman grow in confidence to handle different situations. Designers can take a cue from simulated conversations created for systems that sound like normal conversations.

3. Designing Chatbot Interactions

The designer can model the conversation flow based on the type of interactions between the user and a chatbot. These are segmented into structured and unstructured interactions. As the name suggests, the structured type is more about the logical flow of information, taking menus, choices, and forms into account. For instance, a customer buying a product is prompted to fill in an order form. Similarly, a buyer ordering an item at a restaurant chooses it from a list. The unstructured conversation flow involves freestyle plain text; conversations with family, colleagues, friends, and other acquaintances fall into this segment.

It is important for the designer to understand the capabilities of messaging platforms while designing structured conversations. Designing interactions for a platform supporting plain text only (SMS) is altogether different from designing for a platform supporting custom keyboards. In this case, the designer considers the possibilities of a single-field response and a multiple-field response, and leverages the right one as the case demands. For unstructured conversations, the designer makes sure that the chatbot supports the minimum vocabulary essential for the user to complete his tasks.

A structured conversation can be built using the interview method for single-field responses, or by embedding a URL in the message that takes the user out of the platform for richer messages.
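The interview method amounts to asking one scripted question per field and collecting a single-field response each turn. A minimal Python sketch, with illustrative field names and answers (a live bot would send each prompt and await the reply):

```python
# Sketch of the "interview method": one question, one field, one answer per turn.

ORDER_FIELDS = ["item", "size", "delivery address"]   # illustrative order form

def interview(answers):
    """Pair each scripted field with the user's single-field response."""
    form = {}
    for field_name, answer in zip(ORDER_FIELDS, answers):
        form[field_name] = answer.strip()   # one field filled per turn
    return form

order = interview(["Margherita pizza", "Large", "12 Main St"])
print(order)
```

Because each turn maps to exactly one field, the bot never has to parse free-form text, which is what makes the structured style viable on plain-text platforms like SMS.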

4. Developing Script

Designing interactions for structured and unstructured messages is just the beginning. Developing scripts for these messages will follow suit. While developing the script for messages, it is important to keep the conversation topics close to the purpose served by the chatbot.

The designer ought to keep the sole purpose at the core of scripting and think of all the possibilities before scripting conversations. Take the case of a designer scripting conversations for a chatbot built to assist with hotel reservations. The possibilities are overwhelming, from a user asking the chatbot to suggest the best restaurant in the neighborhood to a user asking the chatbot’s opinion about the best item in a restaurant.

For the designer, interpreting the user’s answers is important to developing scripts for a conversational user interface. The designer also turns his attention to close-ended conversations, which are easy to handle, and open-ended conversations, which allow customers to communicate naturally.

During the script work, if the designer wants the conversation to be as natural as it can be, open-ended questions prove to be the right fit. But with the user saying and typing whatever he wants, the designer needs to be diligent in framing questions and processing responses. Moreover, when the chatbot doesn’t come with AI brains, creating relevant responses to user queries becomes an ordeal, and open-ended questions can steer the conversation away from end goals. The designer is better off framing open-ended questions only when they are truly essential.

This is how an open-ended conversation can develop between the customer and a chatbot.

The designer is not short of ideas or inspiration even before he sets out to design a conversation. The curious nature of designers helps them learn from observation, listening, and experience. For one, simulated conversations can spark ideas in designers, like the conversation between the salesman and the customer. Designers now have a task at hand, because conversation is going to be the future of design.


Chatbots Entry: Back To the Future Digital Technology Robot Intelligence Top-Operation

Bots — Why Should Enterprises Bet On Them?


The purpose of chatbots is to support business teams in their relations with customers, but they are also a viable step toward building a unified, integrated system for higher efficiency inside an enterprise. According to the following passage, a chatbot can act as a concierge that employees can ask about anything available on the intranet, with an aggregator bot efficiently handling internal matters like leave balances.

It takes you 6 minutes to read.

In today’s world, enterprises have extensive IT infrastructure — from multiple on-premise farms to cloud tenants with various providers, which run many internal custom-built applications and ERP surround applications. Then there are other core applications like customer portals, CRM, etc., and finally the backbone: ERP. In some cases there are multiple ERP systems. With all these systems running independently, or collaborating only when forced to, one doesn’t achieve the purpose of a unified system for the enterprise. There may have been attempts at defining an enterprise architecture and aligning new and old systems together, but it can get so complicated, or rather expensive, that the cost outruns the advantages you would otherwise gain from unifying them.

Before we go into how we can unify all the applications and create a homogenized platform, let’s quickly understand why we have to move in this direction:

Security: With hundreds of applications, thousands of employees, and multiple geographies, it is a nightmare for IT teams to ensure the right access for the right people. Most of us still achieve this, but at a huge cost.

Ease of Use: One must log in to multiple portals/applications to perform multiple activities. In a standalone state this is alright, but when customers or sales come in, there is no collective view of information.

Hiring and Managing a Field Workforce: The bot can be used as a hiring tool, evaluating a candidate by the questions he asks the bot while drilling a real-life scenario. And for manpower in the field, it will reduce training time and make the workforce productive sooner, while tracking performance at the root level.

Report Factories: One depends on MIS report-generating teams to keep receiving reports specific to the individual or team. Right from targets vs. actuals to cash flow status — all are made available to the required people at the right time.

BA Teams: Dependence on BA teams to make sense of various KPIs, from production forecasts to sales forecasts and everything in between needed by sales and management teams, is a huge bottleneck, especially in fast-moving industries.

Other Information: Updates, news, new products, new incentives, offers, etc. are all over the place. Failed intranet projects lead to low adoption.

At Acuvate, we are working on integrating all the systems, or all the relevant ones, using Acuvate AIP/BOT Core through microservices and the Azure platform, with an aggregator bot.

This will eliminate redesigning and rebuilding applications to integrate them. Once all the applications are connected to AAIP, our AI and ML algorithms will kick in and provide the relevant data for an employee to make an informed decision. AAIP will also help the executive keep track of what is going well and what is not, so that he can take preventive action in time. This is not a complex NOC, but a system with a simple chat interface on different channels, where key KPIs can be monitored and alerts are sent based on movement in KPIs.
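As a sketch of what alerting on KPI movement might mean in practice, here is a toy threshold rule in Python. The 10 percent threshold, the KPI names, and the numbers are all hypothetical, not Acuvate’s actual logic:

```python
# Illustrative sketch: flag a KPI when it moves more than a set fraction
# between consecutive readings.

def kpi_alerts(readings, threshold=0.10):
    """Compare consecutive readings of each KPI and flag large relative moves."""
    alerts = []
    for kpi, values in readings.items():
        for prev, curr in zip(values, values[1:]):
            change = (curr - prev) / prev
            if abs(change) >= threshold:
                alerts.append(f"{kpi}: {change:+.0%} move")
    return alerts

readings = {"sales_forecast": [100, 103, 88],   # the drop to 88 should trigger
            "production": [50, 51, 52]}         # small moves stay quiet
print(kpi_alerts(readings))
```

In a chat-interface system, each alert string would simply be pushed to the executive’s channel rather than printed.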

BOTs are a new user interface, but AAIP makes them even more powerful by being a one-stop information point for all teams. Employees access different systems for different things, or have to call/email different departments for different information, be it as simple as pay slips, policies, bidding guidelines, travel arrangements, sales performance, new targets, new product details, or current promotions; the list is never-ending.

Acuvate introduces the Aggregator BOT which is built on the philosophy of RATE: (Reform: Assist: Transform: Educate). The BOT can:

Act as a Concierge:

In the process of unifying, bots need to be built for each department, practice, or operation. Once this is done, any employee can simply ping the aggregator bot and enquire about anything from ESS or the intranet. The aggregator bot checks permission levels, identifies the area of the query, and passes it on to the relevant bot, where the SME bot quickly checks the employee’s access level for the query posed and responds accordingly.
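The concierge flow described here can be sketched in a few lines of Python. The permission table, the toy query classifier, and the department bot names below are invented for illustration; a real system would use directory-service permissions and NLP-based intent detection:

```python
# Hypothetical sketch of the concierge flow: the aggregator checks permissions,
# classifies the query's area, and hands it off to that department's SME bot.

PERMISSIONS = {"alice": {"hr", "finance"}, "bob": {"hr"}}

SME_BOTS = {
    "hr": lambda q: f"HR bot answering: {q}",
    "finance": lambda q: f"Finance bot answering: {q}",
}

def classify(query):
    """Toy intent detection; a real system would use NLP."""
    return "finance" if "payslip" in query.lower() else "hr"

def aggregator(employee, query):
    area = classify(query)
    if area not in PERMISSIONS.get(employee, set()):   # permission check first
        return "Access denied for this area."
    return SME_BOTS[area](query)                       # hand off to the SME bot

print(aggregator("alice", "Show my payslip"))   # routed to the finance bot
print(aggregator("bob", "Show my payslip"))     # bob lacks finance access
```

The key design point is that the access check happens at the aggregator before any department bot sees the query, so each SME bot only ever receives requests it is allowed to answer.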

Aggregate Relevant Data:

The Aggregator BOT’s primary function is to help employees in sales and marketing roles combine data from different systems and provide them with information on which decisions can be taken. In its absence, the employee will in all likelihood have to refer to a logistics portal, a supply portal, and CRM systems to make his move. In many cases organizations have had to outsource to backend analysis teams comprising BAs and report generators, which, apart from cost, delays the information reaching the intended user. The problem doesn’t end there: in the age of millennials, where employees change jobs frequently, a huge amount of training must constantly be given to understand the reports sent by backend analysis teams. With the Aggregator BOT, that training can be avoided, as the bot is easy to use and always available.

For IT teams, an instruction can be given to the bot to disable or enable access for a certain person; one need not check each application for the access levels to be granted or removed. It will also present a usage matrix. Similarly, other departments can do the same.

The Aggregator BOT uses Acuvate BotCore, Acuvate’s AI platform. The platform is architected to scale without limit and connect to as many LOB systems as needed, using microservices and Azure infrastructure.

Can we avoid data duplication by having bots hook into various systems, bring data in just for the transaction, and destroy the copy once done?

The digital universe is set to expand to 6 trillion terabytes of data this year. How much data does one need to keep provisioning for? How many times does the data duplication finder need to run?

By Rakesh Reddy Mekala, a serial entrepreneur & Cofounder of Acuvate.


Chatbots Entry: Back To the Future Digital Technology

The Complete Beginner’s Guide To Chatbots


Chatbots have been around for decades, but because of recent advancements in artificial intelligence and machine learning, there is a big opportunity for people to create bots that make for better services and user experiences. The following passage gives a beginner-friendly introduction to chatbots, with plenty of tools and platforms for your further exploration.

It takes you 12 minutes to read.

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

Ready? Let’s do this.

“~90% of our time on mobile is spent on email and messaging platforms. I would love to back teams that build stuff for places where the consumers hang out!” — Niko Bonatsos, Managing Director at General Catalyst


What Is A Chatbot?

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

“Many businesses already have phone trees and they do work though most users get grumpy using them. Text based response trees are much easier and faster and that is what I expect a lot of early bot interactions to be. Sometimes with ability to chat with a live person.” — Josh Elman, Partner at Greylock


If you haven’t wrapped your head around it yet, don’t worry. Here’s an example to help you visualize a chatbot.


If you wanted to buy shoes from Nordstrom online, you would go to their website, look around until you find the shoes you wanted, and then you would purchase them.

If Nordstrom makes a bot, which I’m sure they will, you would simply be able to message Nordstrom on Facebook. It would ask you what you’re looking for and you would simply… tell it.

Instead of browsing a website, you will have a conversation with the Nordstrom bot, mirroring the type of experience you would get when you go into the retail store.

>>Facebook Showing Examples of Chat Bots

Watch this video from Facebook’s recent F8 conference (where they make their major announcements). At the 7:30 mark, David Marcus, the Vice President of Messaging Products at Facebook, explains what it looks like to buy shoes in a Facebook Messenger bot.

Click here to see video.

>>Examples of Chat Bots

Buying shoes isn’t the only thing chatbots can be used for. Here are a couple of other examples:

● Weather bot. Get the weather whenever you ask.
● Grocery bot. Help me pick out and order groceries for the week.
● News bot. Ask it to tell you whenever something interesting happens.
● Life advice bot. I’ll tell it my problems and it helps me think of solutions.
● Personal finance bot. It helps me manage my money better.
● Scheduling bot. Get me a meeting with someone on the Messenger team at Facebook.
● A bot that’s your friend. In China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.
See? With bots, the possibilities are endless. You can build anything imaginable, and I encourage you to do just that.

But why make a bot? Sure, it looks cool, it’s using some super advanced technology, but why should someone spend their time and energy on it?

It’s a huge opportunity. HUGE. Scroll down and I’ll explain.

Why Chatbots Are Such A Big Opportunity

You are probably wondering “Why does anyone care about chatbots? They look like simple text based services… what’s the big deal?”

Great question. I’ll tell you why people care about chatbots.

It’s because for the first time ever people are using messenger apps more than they are using social networks.

Let that sink in for a second.

People are using messenger apps more than they are using social networks.

“People are now spending more time in messaging apps than in social media and that is a huge turning point. Messaging apps are the platforms of the future and bots will be how their users access all sorts of services.” — Peter Rojas, Entrepreneur in Residence at Betaworks


So, logically, if you want to build a business online, you want to build where the people are. That place is now inside messenger apps.

“Major shifts on large platforms should be seen as opportunities for distribution. That said, we need to be careful not to judge the very early prototypes too harshly as the platforms are far from complete. I believe Facebook’s recent launch is the beginning of a new application platform for micro application experiences. The fundamental idea is that customers will interact with just enough UI, whether conversational and/or widgets, to be delighted by a service/brand with immediate access to a rich profile and without the complexities of installing a native app, all fueled by mature advertising products. It’s potentially a massive opportunity.” — Aaron Batalion, Partner at Lightspeed Venture Partners


This is why chatbots are such a big deal. It’s potentially a huge business opportunity for anyone willing to jump headfirst and build something people want.

“There is hope that consumers will be keen on experimenting with bots to make things happen for them. It used to be like that in the mobile app world 4+ years ago. When somebody told you back then… ‘I have built an app for X’… You most likely would give it a try. Now, nobody does this. It is probably too late to build an app company as an indie developer. But with bots… consumers’ attention spans are hopefully going to be wide open/receptive again!” — Niko Bonatsos, Managing Director at General Catalyst


But, how do these bots work? How do they know how to talk to people and answer questions? Isn’t that artificial intelligence and isn’t that insanely hard to do?

Yes, you are correct, it is artificial intelligence, but it’s something that you can totally do yourself.

Let me explain.

How Chatbots Work

There are two types of chatbots: one functions based on a set of rules, and the other, more advanced version uses machine learning.

What does this mean?

Chatbot that functions based on rules:

  1. This bot is very very limited. It can only respond to very specific commands. If you say the wrong thing, it doesn’t know what you mean.
  2. This bot is only as smart as it is programmed to be.

Chatbot that functions using machine learning:

  1. This bot has an artificial brain AKA artificial intelligence. You don’t have to be ridiculously specific when you are talking to it. It understands language, not just commands.
  2. This bot continuously gets smarter as it learns from conversations it has with people.
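The limitation of the rule-based type is easy to see in code. A minimal Python sketch, with hypothetical commands:

```python
# Minimal rule-based bot: it matches exact commands only, illustrating point 1 above.

RULES = {
    "hi": "Hello! Type 'weather' or 'news'.",
    "weather": "It is sunny today.",
    "news": "Nothing interesting happened.",
}

def rule_bot(message):
    # Anything outside the rule table is met with confusion -- the bot is only
    # as smart as it is programmed to be.
    return RULES.get(message.strip().lower(), "Sorry, I don't understand.")

print(rule_bot("Weather"))              # exact command: works
print(rule_bot("what's it like out?"))  # natural phrasing: fails
```

A machine-learning bot replaces the exact-match lookup with a model that maps many phrasings to the same intent, which is why it keeps working when users stray from the script.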

“Beware though, bots have the illusion of simplicity on the front end but there are many hurdles to overcome to create a great experience. So much work to be done. Analytics, flow optimization, keeping up with ever changing platforms that have no standard. For deeper integrations and real commerce like Assist powers, you have error checking, integrations to APIs, routing and escalation to live human support, understanding NLP, no back buttons, no home button, etc etc. We have to unlearn everything we learned the past 20 years to create an amazing experience in this new browser.” — Shane Mac, CEO of Assist


Bots are created with a purpose. A store will likely want to create a bot that helps you purchase something, while someone like Comcast might create a bot that can answer customer support questions.

“Messaging is where we spend a ton of our time and expect to communicate. It is ridiculous we still have to call most businesses.” — Josh Elman, Partner at Greylock


You start to interact with a chatbot by sending it a message. Click here to try sending a message to the CNN chatbot on Facebook.

Artificial Intelligence

So, if these bots use artificial intelligence to make them work well… isn’t that really hard to do? Don’t I need to be an expert at artificial intelligence to be able to build something that has artificial intelligence?

Short answer? No, you don’t have to be an expert at artificial intelligence to create an awesome chatbot that has artificial intelligence. Just make sure not to overpromise on your application’s abilities. If you can’t make the product good with artificial intelligence right now, it might be best not to include it yet.

“Everyone going after AI to try make this scale seems a little too soon. Texting to a computer that doesn’t understand many things you are saying can be very aggravating. So be careful early not to over promise, and give users guard rails” — Josh Elman, Partner at Greylock


However, over the past decade significant advances have been made in artificial intelligence, so much so that anyone who knows how to code can incorporate some level of artificial intelligence into their products.

How do you build artificial intelligence into your bot? Don’t worry, I’ve got you covered: I’ll tell you how to do it in the next section of this post.

How To Build Chatbots

Building a chatbot can sound daunting, but it’s totally doable. You’ll be creating an artificial intelligence powered chatting machine in no time (or, of course, you can always build a basic chatbot that doesn’t have a fancy AI brain and strictly follows rules).

“The difficulty in building a chatbot is less a technical one and more an issue of user experience. The most successful bots will be the ones that users want to come back to regularly and that provide consistent value.” — Matt Hartman, Director of Seed Investments at Betaworks


You will need to figure out what problem you are going to solve with your bot, choose which platform your bot will live on (Facebook, Slack, etc), set up a server to run your bot from, and choose which service you will use to build your bot.
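
To make the server step concrete, here is a platform-agnostic sketch in Python of what a webhook handler boils down to; the field names are invented, since every platform defines its own JSON schema:

```python
# A platform-agnostic sketch of the webhook step: messaging platforms POST
# a JSON update to your server, and you respond by calling their send API.
# The field names ("sender_id", "text", "recipient_id") are made up.

def handle_update(update):
    """Turn an incoming message event into an outgoing reply payload."""
    sender = update["sender_id"]
    text = update.get("text", "")
    if "hours" in text.lower():
        reply = "We're open 9am-5pm."
    else:
        reply = "Thanks for your message! Type 'hours' for opening times."
    # In a real bot this dict would be POSTed to the platform's send API.
    return {"recipient_id": sender, "text": reply}
```

The bot-building services below wrap exactly this plumbing for you.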

“We believe that you don’t need to know how to program to build a bot, that’s what inspired us at Chatfuel a year ago when we started bot builder. We noticed bots becoming hyper-local, i.e. a bot for a soccer team to keep in touch with fans or a small art community bot. Bots are efficient and when you let anyone create them easily magic happens.”— Dmitrii Dumik, Founder of Chatfuel


Here are a ton of resources to get you started.

Platform documentation:

Facebook Messenger
Slack
Discord
Telegram
Kik
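
As a concrete example of what a platform API expects, the Messenger Send API takes a JSON payload containing a recipient ID and a message object (a minimal sketch; actually sending it requires an HTTPS POST with a page access token, omitted here):

```python
import json

def messenger_text_payload(psid, text):
    # Shape of a basic text message for the Messenger Send API:
    # a recipient ID plus a message object. Delivery (the HTTPS POST
    # authenticated with a page access token) is omitted.
    return json.dumps({
        "recipient": {"id": psid},
        "message": {"text": text},
    })
```
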

Great services you can use to build your bot:

Octane AI (I am co-founder and CEO)
Howdy’s Botkit (raised $1.5+ million in funding)
IBM’s Watson
Dexter (owned by Betaworks)

“It’s hard to balance that urge to just dogpile the latest thing when you’re feeling like there’s a land grab or gold rush about to happen all around you and that you might get left behind. But in the end quality wins out. Everyone will be better off if there’s laser focus on building great bot products that are meaningfully differentiated.” — Ryan Block, Cofounder of


Other Resources:

How Bots Will Completely Kill Websites and Mobile Apps by Matt Schlicht

Botlist, an app store for bots.

The Secret To Building Your Own Facebook Chat Bot In Less Than 15 Minutes by Jerry Wang

Go Library for Facebook Messenger Bots by Harrison Shoebridge

How To Build Bots For Facebook Messenger by Facebook

Building Your Messenger Bot [Video] by Facebook

Creating a Bot by Rob Ellis

Telegram Bot API — PHP SDK by Syed Irfaq

A Beginner’s Guide To Your First (Slack) Bot by Slack

Slackbot Tutorial by Michi Kono

Create A Slackbot Using Botkit by Altitude Labs

Sketch UI Kit For Messenger Bots by Mockuuups

How to create your own Telegram bot that answers its users, without coding by Chatfuel


Don’t want to build your own?

Work with 200+ of the best chatbot developers in the world.

Now that you’ve got your chatbot and artificial intelligence resources, maybe it’s time you met other people who are also interested in chatbots.

How To Meet People Interested In Chatbots

Chatbots have been around for decades, but because of the recent advancements in artificial intelligence and machine learning, there is a big opportunity for people to create bots that are better, faster, and stronger.

If you’re reading this, you probably fall into one of these categories:

1. You want to learn how to build a chatbot.
2. You are currently building a chatbot or you have already built one.
3. You want to build a chatbot but you need someone else to help you.
4. You are researching chatbots to see if you and your team should build one.
5. You are an investor potentially interested in investing in chatbot startups.

Wouldn’t it be awesome if you had a place to meet, learn, and share information with other people interested in chatbots? Yeah, we thought so too.

That’s why I created a forum called “Chatbot News”, and it has quickly become the largest community related to chatbots.

The members of the Chatbots group are investors who manage well over $2 billion in capital, employees at Facebook, Instagram, Fitbit, Nike, and Y Combinator companies, and hackers from around the world.

We would love it if you joined. Click here to request an invite to the private chatbots community.

I have also created the Silicon Valley Chatbots Meetup, register here to be notified when we schedule our first event.

By Matt Schlicht, CEO of Octane AI and Founder of Chatbots Magazine.


Digital Technology Digital Transformation

Can Artificial Intelligence & Robots Fight the Cybercrime Epidemic?


As digital technology relentlessly disrupts and sculpts the global landscape, it exposes organisations to both opportunities and threats. All evolution comes with challenges, and the dark world of cybercrime continues to thrive: it is this year’s second most reported economic crime. The following passage analyses the current state and predicts the future of cybercrime from a panoramic and systematic perspective.

It takes you 18 minutes to read.

The recent NHS computer hack using Wanna Decryptor ransomware shut down IT systems with 75,000 attacks in 99 countries. The unprecedented ransomware breach froze computers across the health service with hackers threatening to delete files unless a ransom was paid.

Only last week the popular font sharing site was hacked, exposing 699,464 accounts in the breach. The passwords were scrambled with the MD5 algorithm, which nowadays is easy to crack. The hacker unscrambled over 98% of the passwords into plain text.
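
To see why unsalted MD5 hashes are so easy to unscramble, consider this short Python sketch of a dictionary attack; the wordlist and password are invented. This is also why sites should store passwords with slow, salted schemes such as bcrypt or Argon2 instead:

```python
import hashlib

# Why unsalted MD5 password hashes are easy to "unscramble": the attacker
# hashes candidate words from a wordlist and compares against the leaked
# hash. MD5 is so fast that billions of guesses per second are feasible.
def crack_md5(target_hash, wordlist):
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None
```
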

The unidentified hacker explained his motives for the attack: “I heard the database was getting traded around so I decided to dump it myself – like I always do”. He said it was “mainly just for the challenge and training my pentest skills.” He exploited a union-based SQL injection vulnerability in the site’s software, a flaw he said was “easy to find.”
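
A union-based SQL injection of the kind described can be illustrated, and defended against, in a few lines of Python using an in-memory SQLite database (the table and column names are invented):

```python
import sqlite3

# The class of flaw exploited here: building SQL by string concatenation
# lets an attacker inject their own clauses (e.g. a UNION SELECT that
# dumps another table). Parameterised queries treat input purely as data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fonts (name TEXT)")
conn.execute("CREATE TABLE users (password TEXT)")
conn.execute("INSERT INTO fonts VALUES ('Arial')")
conn.execute("INSERT INTO users VALUES ('secret-hash')")

payload = "x' UNION SELECT password FROM users --"

# Vulnerable: attacker-controlled text is spliced into the SQL itself,
# so the injected UNION leaks the password table.
leaked = conn.execute(
    "SELECT name FROM fonts WHERE name = '" + payload + "'").fetchall()

# Safe: the same input bound as a parameter matches nothing.
safe = conn.execute(
    "SELECT name FROM fonts WHERE name = ?", (payload,)).fetchall()
```
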

These attacks have unleashed a media frenzy, but what of the other undocumented attacks that are happening every minute?

#Cybercrime – Today’s crime of choice


Cybercrime can be committed with minimal resources and from a remote location. The same systems that have made it easier for people to conduct e-commerce and online transactions are now being exploited. Detection of criminals is difficult and it’s a relatively low risk activity for high rewards.

Last year, Ginni Rometty, IBM’s chairman, president and CEO, said “Cybercrime is the greatest threat to every company in the world.”

The costs of cybercrime are predicted to hit $6 trillion annually by 2021, up from $3 trillion just a year ago.

It is predicted that humans have moved ahead of machines as the top target for cybercriminals. Microsoft estimates that by 2020, 4 billion people will be online – twice the number online today.

In this article, we review the following areas:

1. Cybercrime demystified: what is it?
2. The impact of these attacks on both businesses and individuals
3. The current state of Cybercrime
4. Predictions of future Cybercrime
5. How may we be able to use AI and Robotics to combat Cybercrime?


#Cybercrime demystified: what is it?

Cybercrime is defined as a crime in which a computer is the object of the crime or is used as a tool to commit an offence. Crimes that target computer networks or devices include viruses and denial-of-service (DoS) attacks. Crimes that use computer networks to advance other criminal activities include cyberstalking, phishing and fraud or identity theft.

Broadly, cybercrime can be divided into three areas of attack:

1. Property

Theft is rife in the cyberworld. Criminals can steal a person’s bank details and siphon off money; misuse credit cards to make numerous purchases online; run scams to convince people to part with money; or use malicious software to gain access to an organisation’s website or disrupt its systems. Such attacks can also damage software and hardware.


Hacking

This crime is when a person’s computer is broken into and personal or sensitive information accessed. This differs from ethical hacking, which many organisations use to check their internet security protection. In hacking, the criminal uses a variety of software to enter a person’s computer, and the person may not be aware that their computer is being accessed from a remote location.


Software Piracy

This crime occurs when a person violates copyrights and downloads music, movies, games and software. There are peer-sharing websites which encourage software piracy and illegal downloading.

Identity Theft

This has become a major problem with people using the internet for cash transactions and banking services. In this cyber crime, a criminal accesses data about a person’s bank account, credit cards, debit cards and other sensitive information to siphon money or to buy things online in the victim’s name. It can result in significant financial losses for the victim, even ruining credit history.

Malicious Software

These are Internet-based software or programs that are used to disrupt a network. The software is used to gain access to a system to steal sensitive information or data, or to cause damage to software in the system.

2. Individual

This type of cyber crime can be in the form of cyberstalking, distributing pornography, trafficking and “grooming”.

3. Government

Crimes against a government are referred to as cyber terrorism. This is the least common area of attack; however, if successful, it can cause chaos and panic amongst citizens. Disruption of last year’s U.S. electoral process was attempted by state-sponsored groups. The perpetrators may be terrorist organisations or hostile governments of other countries.

#The impact of these attacks on both businesses and individuals

1. Economic

A primary concern is the impact of these attacks on businesses, the lifeblood of the economy. A recent survey showed that 43% of cyber attacks target small businesses, 75% of which have no cyber insurance. In the wake of these attacks, these companies spent an average of $879,582 because of damage or theft of IT assets. In addition, disruption to normal operations costs an average of $955,429.

The consequences can be severe and it has been reported that 60% of small companies go out of business within six months of a cyber attack.

2. Psychological

Cybercrime is sometimes mistakenly perceived as a victimless crime; however, cybercriminals cause their victims emotional, physical and financial trauma.

Terri Howard works for FEI Behavioral Health, a company that provides support and services to companies in the aftermath of critical incidents. At the ISC Congress in Florida last year, she commented:

“Victims often feel that there has been an invasion of their privacy. People feel victimised, that they’ve suffered a traumatic experience. It is the very same feelings that victims of assault experience. They’re upset, they’re depressed, they feel guilt.”

For some people, the threat of their stolen data being used is as traumatic as it actually happening. Howard referred to the Ashley Madison breach, when a man committed suicide after email threats to expose him. His name was never actually leaked.

#Current State of Cybercrime

1. Virtual Bank Heists

2016 was a year of increased intensity, featuring multi-million dollar virtual bank heists. Until recently, cybercriminals mainly targeted bank customers, raiding accounts or stealing credit cards. However, a new breed of attacker has bigger ambitions and is targeting the banks themselves, sometimes attempting to steal millions of dollars in a single attack.

Gangs such as Carbanak have pulled off a string of attacks against US banks. The Banswift group stole $81 million from Bangladesh’s central bank by exploiting weaknesses in the bank’s security, infiltrating its network and stealing its SWIFT credentials, which allowed them to make fraudulent transactions.

2. Cyber Espionage

The world of cyber espionage experienced a notable upsurge and activity was carried out to destabilise and disrupt targeted organisations and countries. Cyber attacks against the US Democratic Party led to the leak of stolen information, becoming one of the main talking points of the US presidential election. The US Intelligence Community attributed the attacks to Russia.

2016 saw two attacks involving destructive malware. Disk-wiping malware was used against targets in Ukraine in January and again in December, attacks which also resulted in power cuts. Trojan Shamoon reappeared after a four-year hiatus and was used against multiple organisations in Saudi Arabia.

3. Business email compromise (BEC) scams

BEC scams, which rely on carefully composed spear-phishing emails, caused over $3 billion of theft in the past three years.

During the US elections, a simple spear-phishing email provided access to the Gmail account of Hillary Clinton’s campaign chairman, John Podesta, without the use of any malware or vulnerabilities.

4. Spam botnets-for-hire

The availability of spam botnets-for-hire, such as Necurs, allowed ransomware groups to mount massive email campaigns during 2016, pumping out hundreds of thousands of malicious emails daily.

Attackers are demanding more and more from victims with the average ransom demand in 2016 rising to $1,077, up from $294 a year earlier.

Attackers have honed a business model that usually involves malware hidden in innocuous emails, unbreakable encryption, and anonymous ransom payment involving cryptocurrencies.

5. Internet of Things (IoT)

The Internet of Things (IoT) is the concept of connecting any device with an on-off switch to the internet, from our headphones to our washing machines.

Mirai emerged last year: a botnet composed of IoT devices such as routers and security cameras that carried out the largest DDoS attack ever experienced. Distributed Denial of Service attacks involve huge numbers of individual systems – usually hijacked – flooding a website with traffic and causing its servers to collapse. Weak security made these devices vulnerable targets for attackers. Several of Mirai’s targets were cloud-related services, such as the DNS provider Dyn.

This, along with the hacking of millions of MongoDB databases hosted in the cloud, shows how cloud attacks have become a reality and are likely to increase in 2017.

#Predictions for the Global Cybercrime landscape

1. Mobile

In the past year, RSA has confirmed that 60% of fraud transactions come from a mobile device. As mobile traffic is ever-increasing and overtakes web transactions, mobile fraud will rapidly grow, especially as banks and retailers serve their customers via mobile apps.

Biometric authentication is starting to happen now and user experience is the motivation over cybersecurity.

Fingerprint, voice, and eyeprint, combined with risk-based transaction monitoring, will be the predominant technology combinations for authentication and fraud management in mobile devices.

2. Card-not-present (CNP) fraud

It is predicted that the launch of 3D Secure 2.0, led by EMVCo, will change the e-commerce ecosystem. The new system offers many enhancements to the 1.x password-based, “challenge all” approach. As the scope for in-person fraud diminishes, card-not-present (CNP) fraud is expected to soar to over $7 billion in the U.S. by 2020. Today, online money transfer and bill pay services account for approximately 1 in 5 e-commerce fraud transactions, followed by the hospitality and airline, electronics, jewellery, fashion, entertainment and gaming industries.

3. Phishing

Phishers will aim to increase the duration of a live attack through improved methods. It is also a strong possibility that clever phishing attacks will target cardholder information, as breaches and skimming of POS terminals and ATMs will be far less effective once more terminals are upgraded to support EMV cards.

4. Ransomware

Ransomware is malicious software designed to block access to a computer system until a sum of money is paid. Increasingly, ransom payments are requested via bitcoin, a difficult-to-trace online currency.

From credential-stealing modules like the one known as CryptXXX to “aggressive” file encryptors such as Locky, the various forms of ransomware demonstrate today’s cyber criminals’ ingenuity and persistence when it comes to stealing data.

Stephen Wright, General Manager at Cyber Skills Centre, predicts the threat posed by ransomware will only get worse:

“Recently, the most prevalent and newsworthy attacks have been ransomware-based. In the coming 12 months, these will likely have greater sophistication and possibly move to also targeting households, individuals, and mobile devices.”

5. Internet of Things (IoT) attacks

The recent success of Amazon Echo and Google Home demonstrates that IoT is the future for technology in the home. However, as the IoT grows and the number of connected devices increases, experts predict that related hacking will escalate.

It is predicted that 96% of senior business leaders will be using IoT by 2020 and, at the moment, this is one of the weakest areas in terms of security.

Up to 200 billion IoT devices will need security by 2020.

6. 3D-printed fingerprints

With advanced technology at hackers’ fingertips, we could have scenarios in which an attacker gains access “to a critical system,” warns Stephen Wright.

He explains: “We often think of fingerprint or retinal scanning being the ultimate passwords, but combine high-quality photography with advanced 3D printing and there’s no reason someone couldn’t copy your fingerprint just by taking a photo of your hand in just the right position”.

Whatever the future holds for cybercrime, one thing is certain: businesses of all sizes will need to have security strategies in place if they want to protect their assets.

#How may we be able to use AI and Robotics to combat Cybercrime?

There is a global shortage of cybersecurity professionals equipped with the skills required to fight the increasing sophistication and expertise of cybercriminals. The cybersecurity unemployment rate has dropped to zero percent, and it is predicted that unfilled cybersecurity jobs will reach 1.5 million by 2019.

Should we harness techniques based around artificial intelligence, machine learning and deep learning, rather than recruiting and training more humans to fill this skill deficit in the employment market? After all – a specially programmed AI can ‘think’ about cybersecurity in more complex detail than a human.

1. IBM’s Watson – The next cybercrime-fighting superhero?

IBM’s Watson made its debut in 2011 as a winning contestant on the American quiz show Jeopardy! Originally, the cognitive computing system was designed to take large, unstructured datasets in the English language and pull answers to queries out of that data. Watson has since evolved to work on large data sets looking for patterns, rather than the answer to a specific question; for instance, it has worked alongside the Baylor College of Medicine to help with the study of kinases, enzymes that can sometimes indicate cancer.

With large quantities of data, the speed of augmented intelligence is impressive. For example, while a doctor may read about 6 medical research papers in a month, Watson can read half a million in around 15 seconds. From this, machine learning can suggest diagnoses and advise on a course of treatment.

Inevitably, IBM Watson, like its literary namesake, is now working to solve cybercrime.
The Watson for Cybersecurity beta program now helps 40 organisations to use the computer’s cognitive power to help spot cybercrime.


Currently, cybersecurity operations generally require a human to spend time going through alerts of potentially malicious activity – a repetitive and time-consuming process. Teams process over 200,000 security events per day on average, and over 20,000 hours per year can be wasted in the pursuit of false alarms.

Cognitive computing is 30-40 percent faster than traditional rule-based systems and results in fewer false positives. Because it learns as it goes, it doesn’t repeat the same mistakes. The more it analyses, the better AI can understand malware and fraudulent activity patterns, which will help cybersecurity professionals gain ground in the fight against hackers.

2. AI Squared (AI2)

Researchers from MIT have created a virtual AI analyst. The platform, AI Squared (AI2), is able to detect 85 percent of attacks – roughly three times better than current benchmarks – and also reduces the number of false positives by a factor of five, according to MIT.

AI2 was tested using 3.6 billion log lines generated by over 20 million users over a period of three months. The AI trawled through this information and used machine learning to cluster the data to find suspicious activity. Anything flagged as unusual was then presented to a human operator, who issued feedback.
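
The clustering-and-flagging step can be caricatured as simple outlier detection; the sketch below (invented data, standard library only) flags activity far above the population mean for human review, standing in for AI2's far more sophisticated models:

```python
import statistics

# A toy version of the AI2 idea: score activity against what is "normal"
# for the population and surface only the outliers for a human analyst.
def flag_suspicious(login_counts, threshold=3.0):
    """Flag users whose daily login count sits far above the mean."""
    values = list(login_counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return sorted(user for user, n in login_counts.items()
                  if stdev > 0 and (n - mean) / stdev > threshold)
```
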

3. Deep Learning

While there are a number of companies using machine learning to fight hacking and cybercrime, there are those who are already looking to take the technology to the next level with the use of deep learning. One of those is Israeli firm Deep Instinct, which lays claim to being the first company to apply deep learning to cybersecurity.

Deep Instinct aims to detect previously unknown malicious threats – the sorts of attacks that might otherwise slip through the net because they’re too new to be noticed. It’s simple for malicious software developers to enable their creations to evade detection, as slight modification of the code can make it unrecognisable. However, that can be made much more difficult with the introduction of deep learning.

“We’re trying to make the detection rate as close as possible to 100 percent and make life as difficult as possible for creators of new lines of malware. Today, it’s very easy; they modify a few lines of malware code and manage to evade detection by most solutions. But we hope to make life very difficult for them with detection rates of 99.99 percent,” commented Dr Eli David, Deep Instinct’s CTO and artificial intelligence expert.

#The potential of AI and machine learning

According to 700 security professionals surveyed by IBM, the top benefits of using cognitive security solutions were improved intelligence (40%), speed (37%) and accuracy (36%).

IBM say Watson performs 60 times faster than a human investigator and can reduce the time spent on complex analysis of an incident from an hour to less than a minute. The development of quantum computing, which is expected to be more widely available in the next 3 to 5 years, could make Watson look as slow as a human.

Machine learning and AI speed up the lengthy process of sorting through data. Quantum computing aims to be able to look at every data permutation simultaneously. Canada-based company D-Wave recently sold its newest, most powerful machine to a cyber security company called Temporal Defense Systems to work on complex security problems.

The rules-based systems of yesterday are no longer effective against today’s sophisticated attacks. Any system that can improve accurate detection and boost incident response time is going to be in demand.

We have clearly reached a point where the sheer volume of security data can no longer be processed by humans. The answer to winning the cat-and-mouse game of cybercrime lies in so-called human-interactive machine learning.

Human-interactive machine learning systems analyse internal security intelligence, and marry it with external threat data to direct human analysts to the needles in the haystack. Humans then provide feedback to the system by tagging the most relevant threats. The system adapts its monitoring and analysis based on human inputs, enhancing the chances of finding real cyber threats and minimising false positives.
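
That feedback loop can be sketched as follows; the alert features and weights are invented, and real systems use far richer models, but the analyst-tags-then-system-adapts cycle is the core idea:

```python
# A sketch of human-interactive machine learning for alert triage:
# the system scores alerts, the analyst tags real threats vs false
# alarms, and the feedback shifts the feature weights.
def score(alert, weights):
    # An alert is just a list of feature names here.
    return sum(weights.get(feature, 0.0) for feature in alert)

def analyst_feedback(weights, alert, is_real_threat, step=1.0):
    # Tagging a real threat boosts its features; a false alarm demotes them,
    # so similar alerts sink in the queue next time.
    delta = step if is_real_threat else -step
    for feature in alert:
        weights[feature] = weights.get(feature, 0.0) + delta

weights = {"failed_logins": 1.0, "off_hours": 1.0, "new_device": 1.0}

# The analyst dismisses an off-hours/new-device alert as a false alarm twice.
for _ in range(2):
    analyst_feedback(weights, ["off_hours", "new_device"], is_real_threat=False)
```
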

Deploying machine learning to the laborious first line security data assessment enables human analysis to focus on advanced investigations of threats. The unity of applying AI using a human-interactive approach offers the optimum solution for keeping ahead in the cybercrime war.

It’s important to recognise that while machine learning may be both fast and cheap, it is not perfect.

Algorithms can be manipulated by hackers. Donal Byrne, CEO of Corvil says:

“Those software applications interact with each other in very complicated ways. If someone understands how the algorithm works, it can be manipulated in predictable ways. This means that even without changing the software itself, introducing specific input data can allow one to manipulate an algorithm towards a different outcome than expected.”

“Circuit breakers” can be used to monitor the algorithms’ output to combat this manipulation. This is an ‘overseer’ algorithm or software that can pull the plug – stopping all or a specific portion of the action – whenever it sees divergent conditions beyond a certain threshold.
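
A minimal sketch of such an overseer, with invented numbers, might look like this:

```python
# A sketch of the "circuit breaker" idea: an overseer watches another
# algorithm's output stream and pulls the plug when it drifts beyond a
# threshold from the expected baseline. Baseline and threshold are
# illustrative values.
class CircuitBreaker:
    def __init__(self, baseline, threshold):
        self.baseline = baseline
        self.threshold = threshold
        self.tripped = False

    def check(self, output):
        # Divergence beyond the threshold trips the breaker; once tripped,
        # the action stays stopped until a human investigates and resets.
        if abs(output - self.baseline) > self.threshold:
            self.tripped = True
        return not self.tripped

breaker = CircuitBreaker(baseline=100.0, threshold=25.0)
```
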

However this cannot completely solve the problem of rogue algorithms.

When these algorithms are used within large computer systems no human can monitor the volume and speed of the interactions. We have to use algorithms to monitor the performance of algorithms generated by other algorithms.

It is the beginning of what John Danaher calls an algocracy – an algorithm-driven artificial intelligence revolution. “By gradually pushing human decision-makers off the loop, we risk creating a ‘black box society’. This is one in which many socially significant decisions are made by ‘black box AI’. That is: inputs are fed into the AI, outputs are then produced, but no one really knows what is going on inside. This would lead to an algocracy, a state of affairs in which much of our lives are governed by algorithms.”


Global spending on cybersecurity products and services is predicted to exceed £1 trillion over the next five years, from 2017 to 2021.

By 2020, 60% of digital businesses will suffer a major service failure due to the inability of IT security teams to manage digital risk, according to Gartner. If we marry all this new Internet of Things (IoT) data with artificial intelligence (AI) and machine learning, there’s a chance to win the fight against cybercriminals.


When it comes to cyber security, businesses need to act now to tighten up cyber defences. With large-scale security breaches only increasing in number over recent years, organisations both big and small should consider investing in AI systems designed to bolster their defences.

With the Centre for Cyber Safety and Education revealing that the world will face a shortfall of 1.8 million cyber security professionals by 2022, we are reaching a critical point where urgent action is needed.

Not only must organisations invest in preventative AI, but the government must continue to back the development of the next generation of technology professionals. After all, there’s no use in having the technology without skilled humans knowing how to use it.

By Alexandra Dunton, a Business Consultant and Founder of LEX Marketing.


Big Data Digital Technology Robot Intelligence

Voice User Interfaces Are Here To Stay


VUIs, or Voice User Interfaces, widely known among sci-fi fans from Jarvis in the superhero blockbuster series Iron Man, have now entered people’s real lives, letting users control a machine simply by talking, with technology giants Apple and Amazon publishing their own personal assistants. VUI is considered a form of artificial intelligence and an essential ingredient of a completely connected future. However, the following passage puts before us not only its key advantages but also some limits of using VUI in public spaces, as well as its fatal flaw.

It takes you 6 minutes to read.

When you talk to a human being, there’s never an unrecoverable error state.
— A. Jones, Design Lead at Google

Today, you can pick up your phone, or another magical device, and say: “Show me coffee shops within two miles that have Wi-Fi and are open on Sundays”, and you’ll get directions to all of them.

The term “conversational UI” is making a lot of headlines right now. The main goal is to turn everything into a conversation: turning on the lights, closing or opening the doors and windows in your smart home, ordering flowers for your girlfriend, or asking the fridge whether you’re out of milk. But what exactly does “conversational” mean? It’s a back-and-forth exchange of information. Each individual fragment is a simple interaction, and the next one has no knowledge of the previous. Each of these conversational interactions could be completed on its own. Anyone with a smartphone can easily book plane flights, transfer money, order food or cabs, or find local movie times, all by using nothing more than a regular phone and text. The youngest users today are incredibly adept at two-thumbed texting, multitasking between different chats, commenting on Instagram, swiping left/right on Tinder and blabbing with a friend via FaceTime. Why should we add another mode of communication on top of that?
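
One such fragment can be sketched as a tiny slot-filling parser; the pattern below handles only the coffee-shop style of request and is nothing like a real NLU model:

```python
import re

# A toy slot-filling parser for one conversational fragment: it pulls the
# place type and distance out of a request like the coffee-shop example.
# Real voice assistants use trained language-understanding models, not a
# single regular expression.
PATTERN = re.compile(
    r"show me (?P<place>.+?) within (?P<miles>\w+) miles?", re.IGNORECASE)

def parse_request(utterance):
    match = PATTERN.search(utterance)
    if not match:
        # Outside this one fragment, the parser understands nothing.
        return None
    return {"intent": "find_place",
            "place": match.group("place"),
            "distance_miles": match.group("miles")}
```
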

Yeah, voice has some key advantages:

>> Speed

According to recent Stanford studies, humans can dictate text messages much faster than they can type.

>> Hands-free

Pretty useful while you’re driving/cycling/cooking, and when you’re across the room from your device.

>> Empathy

Humans have trouble understanding tone in written words, whereas voice carries volume, intonation, and tone.

>> Intuitiveness

Everyone knows how to talk.

In addition, devices with a small screen (such as smart watches) or no screen at all (Amazon Echo, Google Home, Apple HomePod) are becoming more popular. Voice is often the preferred (or the only) way to interact with them. 35.6 million users used a voice-enabled speaker in 2017, up 126% over 2016.

The fact that voice is today the most natural way for humans to communicate can’t be overemphasized. Imagine a world where you’ve created a technology that needs no explanation, because your customers already know how to use it. How? They can simply ask. But voice is NOT always a useful “tool” (if I may say so) for your clients.

The main reasons are listed below:

# Public spaces

I’m absolutely sure that many of you, yeah you, who are reading this article right now, work in open-plan offices, incubators, coffee shops or live-work lofts. Imagine asking your computer to do a task: “Find me the last edited .rb file from the past hour.” Let’s assume it’s a very common request for Siri. Now imagine everyone around you doing the same thing. It would be chaos, wouldn’t it? And when you speak, whose computer is listening?
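As an aside, the spoken request in that example is only a few lines of code for the computer itself. A rough Python sketch (the function name and the one-hour window are my own assumptions):

```python
import time
from pathlib import Path
from typing import Optional

def last_edited_rb_file(directory: str = ".") -> Optional[Path]:
    """Return the .rb file under `directory` most recently modified
    within the past hour, or None if there is no such file."""
    cutoff = time.time() - 3600  # one hour ago
    candidates = [p for p in Path(directory).rglob("*.rb")
                  if p.stat().st_mtime >= cutoff]
    return max(candidates, key=lambda p: p.stat().st_mtime, default=None)
```

The hard part of a VUI is not executing the task; it’s mapping a spoken sentence onto code like this reliably, in a noisy room, for every user.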

# Discomfort

Voice UIs (VUIs) are becoming more commonplace, but not everyone feels comfortable speaking out loud to a computer, even in private. Even me.

# Texting

Many people spend hours a day on their mobile phones and type a lot. For them, texting is the default mode, and I’m not always in the mood to switch to voice either.

# Privacy

Of course! If you need to discuss some health issues or… I don’t know, some kind of personal stuff, you won’t do so by speaking to your phone on the train ride into work.

So, engineers and designers: do you still think your mobile app should have a VUI? If your main use case is hands-free (a cooking app, or playing podcasts while driving), absolutely. If people will use your app in a particularly vulnerable or emotional state (health care, comfort), voice can help them too. If you’re building a skill for Amazon Echo, which many people use in the privacy of their home, voice is the best option again.

But if your use case is mostly in public places (say, navigating a public transit system for people on the go), a VUI might not be appropriate. If it’s an app people will use at the workplace, a text-messaging mode might be better.

Think carefully about your users and their use cases. Will your users benefit from a VUI? Adding a VUI just because it’s cool and trendy is not the right move, even for creative developers. Chatbots can have a VUI, but more typically they use a text-based interface, and most major tech companies, such as Google, Facebook and Microsoft, offer platforms for developing bots.

Although VUIs are becoming more common, there are still many users who are unfamiliar with them or don’t trust them. Many people try out the voice recognition on their smartphone once and, after it fails, never try it again.

Enabling users to speak to their phones and devices opens up an entire world of experiences. Whether it’s looking up a piece of trivia during a dinner argument, asking a device to dim the lights, or managing the everyday tasks of your life, a voice user interface can enhance them all.

P.S. Don’t forget one important thing:

“Having a conversation with a system that can’t remember anything beyond the last interaction makes for a dumb and not very useful experience.”

inspired by Cathy Pearl and her extraordinary book “Designing Voice User Interfaces”
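Pearl’s point can be sketched in a few lines: a hypothetical dialog manager that remembers one piece of context between turns (the last city asked about, with canned weather answers invented purely for illustration) can handle a follow-up like “And tomorrow?”, while a stateless handler cannot:

```python
# Sketch of a dialog manager that keeps minimal context across turns,
# so a follow-up question can refer back to the previous one.
class DialogManager:
    def __init__(self):
        self.last_city = None  # context remembered between turns

    def handle(self, message: str) -> str:
        text = message.lower()
        for city in ("london", "paris", "berlin"):
            if city in text:
                self.last_city = city
                return f"Today's weather in {city.title()}: sunny."
        if "tomorrow" in text and self.last_city:
            # the follow-up only makes sense because we remembered the city
            return f"Tomorrow in {self.last_city.title()}: rain."
        return "Which city do you mean?"

bot = DialogManager()
print(bot.handle("What's the weather in Paris?"))  # Today's weather in Paris: sunny.
print(bot.handle("And tomorrow?"))                 # Tomorrow in Paris: rain.
```

A fresh, memoryless instance asked “And tomorrow?” can only answer “Which city do you mean?”, which is exactly the dumb experience the quote warns about.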

By Irma Kornilova, business development manager at