Join me for a live webinar!
Tuesday, April 17, 11 am, CEST
Register here: https://attendee.gotowebinar.com/register/380234600187044099
In training course 550 (see Full Immersion Course) I included a small section showcasing the integration of Intershop Commerce Management with other systems and tools. The idea was to have a bit of fun: a few things to just watch, relax, sit back and cool down after a week of training with persistent objects, business objects, pagelets, ISML and a lot of code writing, and a few things that might inspire you to keep working with Intershop and apply what you have learned.
As I knew from one of our previous hackathons, a team had developed a simple Alexa skill querying ICM data (thanks to Mario and Dorina), so I mentioned it in this section. In case trainees were interested, I could always pull up the slides from that event. However, many asked for much more than a presentation slide; apparently I was not the only one who found the topic interesting. They also wanted to know how to develop such a skill. So I asked myself: Can the data we prepare for the ICM in our training course also be served on an Alexa device?
This is what I found out.
The basic setup
From an infrastructure point of view we need an Alexa device, the Amazon developer portal, an endpoint and one or more web servers.
- Alexa device: Here we record the voice via a microphone and later hear the response. There are also a number of free simulators and apps that serve the same purpose.
- Amazon developer portal: Here we define our new skill. What shall it do? Which sentences shall it understand and respond to? The Amazon speech engine will later use this information to translate voice into JSON strings.
- Endpoint: The endpoint computes the response that Alexa will later speak to the user. Most skills are served from AWS endpoints.
- Web server: If the endpoint needs data for its response, it can query any web server. In our case this is the Intershop Commerce Management.

Having no Amazon Echo or Echo Dot nearby, I downloaded an app for my smartphone, created an account on the Amazon developer portal, opened a new AWS account and started the Intershop 7 application server from the training. So far, so good.
Defining the new Alexa skill
As mentioned above, I wanted to serve product storage information from the warehouse use case, the use case we work on all week in the course. So it made sense to call my skill WarehouseSkill, right? When creating such a skill, there are a few Alexa terms you need to learn:
- Skill: A skill is a small collection of features serving a common purpose (e.g., a program, an application). Here: provide customers with product stock data.
- Name: A skill needs a name. Here: WarehouseSkill.
- Invocation name: The word the user later says to wake up the skill (i.e., start the program). I used warehouse. The user can now wake up my skill by saying: Alexa, open warehouse. Or: Alexa, launch warehouse.
- Intent: An intent is a single feature your skill provides. Typically, a skill has several intents. I implemented only one feature (one intent), the StorageIntent, where the user can get information about the stock data for a product.
- Utterance: An utterance (in the code also called a sample) is a spoken sentence the user can say to start an intent. I declared two utterances: Give storage for product Sony and Provide storage for product Sony.
- Slot: A slot represents a variable that has a type and is filled during speech recognition. You can later access that variable in the code. Hence, my utterances were rather Give storage for product {PNAME} and Provide storage for product {PNAME}.
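To make the slot mechanics concrete: when the user says one of the utterances above, the Amazon speech engine sends the endpoint a JSON request following the Alexa Skills Kit request format. Abridged, and with an illustrative requestId and recognized value, it looks roughly like this:

```json
{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.example",
    "intent": {
      "name": "StorageIntent",
      "slots": {
        "PNAME": {
          "name": "PNAME",
          "value": "Sony"
        }
      }
    }
  }
}
```

The endpoint code later reads the recognized product name from request.intent.slots.PNAME.value.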
So, this is my final IntentSchema definition file (the MyProduct type is custom and enumerates the products we have in store):
"intents": [
  {
    "name": "StorageIntent",
    "samples": [
      "give storage for {PNAME}",
      "provide storage for {PNAME}"
    ],
    "slots": [
      {
        "name": "PNAME",
        "type": "MyProduct"
      }
    ]
  }
]
I know that’s not enough; one might also say: Is product Sony still available? Or: Is Sony still in stock? A skill is often only as good as the definition of its utterances. However, two utterances are fine for testing. Surprisingly, it took me a while to declare the slot. In a first attempt I named it productID, as the REST call on the ICM needs the product SKU. However, no matter what slot type I used and what I said, my slot variable was always empty. I guess productID is some kind of reserved word. As soon as I renamed it, it worked.
Computing the response: A lambda end point
For the first try I followed the standard approach and went to AWS: logged on, typed Lambda into the search bar and created a new Lambda function, also called WarehouseSkill. I hoped that if the skill and the serving endpoint have the same name, they might like each other and do whatever I want them to do (..smile..).
I am surely no expert in JavaScript programming, but all I needed to do was:
- Decide whether the incoming request is a LaunchRequest or a StorageIntent, then
- Call my ICM application server’s REST URL and finally
- Assemble all the information into the speech output.
It all went quite smoothly.
1. The first part is pretty straightforward JavaScript coding.
var request = event.request;
if (request.type === "LaunchRequest") {
    ...
} else if (request.type === "IntentRequest") {
    if (request.intent.name === "StorageIntent") {
        ...
    }
}
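For context, here is a minimal sketch of how this routing sits inside a Lambda handler. The buildResponse helper and the spoken texts are my own illustration, not the course code; the envelope fields follow the standard Alexa response format:

```javascript
// Minimal Lambda handler sketch (illustrative, not the course code).
// Routes a LaunchRequest and the StorageIntent to canned answers.
function handler(event, context, callback) {
    var request = event.request;
    if (request.type === "LaunchRequest") {
        // "Alexa, open warehouse"
        callback(null, buildResponse("Welcome to the warehouse skill."));
    } else if (request.type === "IntentRequest" &&
               request.intent.name === "StorageIntent") {
        // The PNAME slot carries the recognized product name.
        var pname = request.intent.slots.PNAME.value;
        // ...here the real skill queries the ICM and builds the answer...
        callback(null, buildResponse("Looking up storage for " + pname + "."));
    } else {
        callback(null, buildResponse("Sorry, I did not understand that."));
    }
}
// In Lambda this function is exposed as: exports.handler = handler;

// Wraps plain text into the standard Alexa response envelope.
function buildResponse(text) {
    return {
        version: "1.0",
        response: {
            outputSpeech: { type: "PlainText", text: text },
            shouldEndSession: true
        }
    };
}
```

The real StorageIntent branch is of course asynchronous, which is exactly the point of the next two steps.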
2. For calling REST servers you can find brilliant code on the Internet; the fewer lines of code, the better. I simply did:
var url = "http://--server--/INTERSHOP/rest/WFS/inSPIRED-inTRONICS-Site/-/stocks/" + pid;
var req = http.get(url, function(res) {
    var body = "";
    res.on('data', function(chunk) {
        body += chunk;
    });
    res.on('end', function() {
        body = body.replace(/\\/g, '');
        ...
    });
});
req.on('error', function(err) {
    callback('', err);
});
Make sure you handle the asynchronous nature of Node.js! Otherwise you end the skill before the response from the ICM has arrived.
3. In the above-mentioned training course we create a REST response that looks as follows.
{
  "elements": [
    {
      "type": "Link",
      "attributes": [
        { "name": "Count", "type": "Integer", "value": 507 }
      ],
      "uri": "inSPIRED-inTRONICS-Site/-/stocks/M4548736000919/Germany",
      "title": "gtB_AAABcMgAAAFg6egahvpn"
    },
    {
      "type": "Link",
      "attributes": [
        { "name": "Count", "type": "Integer", "value": 537 }
      ],
      "uri": "inSPIRED-inTRONICS-Site/-/stocks/M4548736000919/Mexico",
      "title": "gtB_AAABcMgAAAFg6egahvpn"
    }
  ],
  "type": "ResourceCollection",
  "name": "stocks"
}
So the country name is the last part of the URI, and the stock count is in the value attribute. A simple loop and split did the trick, and my output text was ready to be sent back to Alexa.
var textReturn = "The product is available ";
var stockQuote = JSON.parse(body);
var elements = stockQuote.elements;
var notfirst = false;
elements.forEach(function(element) {
    if (notfirst == true) {
        textReturn += " and ";
    }
    var countArray = element.attributes;
    var count = countArray[0].value;
    textReturn += count + " times";
    var countryString = element.uri;
    var arrayOfStrings = countryString.split("/");
    var last_element = arrayOfStrings[arrayOfStrings.length - 1];
    textReturn += " in " + last_element;
    notfirst = true;
});
callback(textReturn + ".");
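Plugging the sample response from above into this loop gives the expected spoken answer. A self-contained check (the callback is replaced by a console.log here, and the sample body is abridged to the fields the loop actually reads):

```javascript
// Sample body as returned by the ICM stocks REST call (abridged from above).
var body = JSON.stringify({
    elements: [
        { attributes: [{ name: "Count", value: 507 }],
          uri: "inSPIRED-inTRONICS-Site/-/stocks/M4548736000919/Germany" },
        { attributes: [{ name: "Count", value: 537 }],
          uri: "inSPIRED-inTRONICS-Site/-/stocks/M4548736000919/Mexico" }
    ]
});

// Same loop-and-split logic as in the skill code.
var textReturn = "The product is available ";
var notfirst = false;
JSON.parse(body).elements.forEach(function (element) {
    if (notfirst) { textReturn += " and "; }
    textReturn += element.attributes[0].value + " times";
    var parts = element.uri.split("/");
    textReturn += " in " + parts[parts.length - 1];
    notfirst = true;
});

console.log(textReturn + ".");
// → The product is available 507 times in Germany and 537 times in Mexico.
```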
As a last step I went back to the Amazon developer portal and set the endpoint of the WarehouseSkill skill to the WarehouseSkill Lambda function on AWS. A simple copy and paste of a URL. Want to see it working?
Conclusions
I immediately replaced the mention of the Alexa integration in course 550 with a hands-on task. This is nothing you should just see on a slide; you should do it. It’s fun. You can play around with it and have Alexa say whatever you want.
In retrospect, the things that took some time were:
- The productID as a slot name did not work. Solution: change the slot name.
- Setting the trigger for the Lambda function on AWS. The trigger is the incoming Alexa request, but it was never offered as an option. Solution: just google the problem; many people have experienced the same thing. You need to change the region settings of AWS, as not all regions offer Alexa request handling.
- The WarehouseSkill was completed but not enabled on my devices/apps. Solution: go to alexa.amazon.com, select Skills in the left menu, then My Skills in the top right corner, and finally choose the section Dev Skills. It is a little hard to find how to add a skill under development, as it does not yet appear in any skill store.
Things that worked out right away were:
- Creating the lambda function.
- Calling the Intershop Commerce Management from AWS lambda and including that data into the speech response.
Now I ask myself: Why are we not offering more speech services to our customers? Services like: Alexa, can you let me know when my order number 123 will arrive? Or, in the B2B business, standing in front of an empty shelf in the warehouse: Alexa, please put nails 543 on my inTRONICS wish list. A lot of services can be provided before the sale (product information, product selection, wish lists) and after the sale (order information, order delivery, similar product suggestions). These are services that are not difficult to implement, do not conflict with data privacy laws, and your customers might really like them!