Looking back: Pidoco at Web Summit 2014

Every year at the beginning of November, Dublin becomes an international tech capital as the Web Summit takes over the Irish city. The event is a real success story: first held in 2010 as a meeting of a few hundred Irish IT specialists, it is now possibly Europe’s most influential technology event. This year, the Web Summit took place from November 4 to 6 and attracted 22,000 attendees from more than 100 countries.

But it was not just a single conference; there were actually eight summits – the Enterprise, Machine, Marketing, Music, Builders, Sport, Night and Food Summit – grouped under the umbrella of the Web Summit. The Web Summit also included further events and stages, like several startup and attendee workshops, roundtables, the center and library stages and the cinema. Additionally, there were more than 500 speakers, including Drew Houston (founder of Dropbox), Bono (lead singer of U2), Lars Silberbauer Andersen (Global Director of Social Media at LEGO), Eric Wahlforss (co-founder of SoundCloud), Tony Hawk (skateboarder) and Mark Pincus (founder of Zynga), to name just a few. So we were very excited to be part of the Web Summit and curious about what to expect in Dublin.

The entrance of the Web Summit main hall.


After arriving in Dublin on Monday evening, we got a little sightseeing tour of the city on one of the coaches running from the airport straight to our hotel. As it was our first time in the beautiful Irish capital, we were impressed by the skyline, by old and new landmarks like Trinity College, Dublin Castle, the Samuel Beckett Bridge and the Docklands, by the beautiful parks and statues, and of course by the friendly and cheerful Dubliners.

Dublin at night – The Four Courts.


As the Web Summit started on Tuesday, we got there early to register and were welcomed by friendly volunteers who handed us our passes and wristbands along with additional information and helpful hints. (The next day, there were even volunteers dancing to 80’s disco music to welcome visitors.)

We basically spent Tuesday and Thursday in the various halls, strolled from stand to stand, had inspiring conversations with exhibitors from all over the world, met old friends and made new ones. And of course we attended some of the talks, e.g. Tom Preston-Werner presenting the project codestarter, Peter Thiel, and Jules Coleman and Blake Mizerany talking about scaling a very small product up to a large one while keeping the technology very simple. We were also really impressed by Leland Melvin, the former NFL player who later became a NASA astronaut. We would have loved to attend more presentations and listen to more impressive stories, especially as there were so many different topics and summits. Unfortunately, our schedule was so tight that we were not able to make it to many of them.

On Wednesday, we presented at the Builders Summit and used the opportunity to extend our network and to talk to visitors and new partners. We had great conversations and received really inspiring feedback on the interactive features we released this summer.

The Center Stage of the Web Summit.


As the Night Summit was an official part of the Web Summit as well, we also explored Dublin at night and luckily ended up in the “right” pub on Wednesday to see the English rockers The Kooks, met up with friends and connected with new partners while enjoying the lively Irish pub culture.

The famous Temple Bar.


We had a great week in Dublin and were impressed with our very first Web Summit! A big thank you to all of you who visited our stand and contributed to the Web Summit, and a massive thank you to the organizers for creating this awesome event! We will definitely come back!


Here are some more impressions of our trip to Dublin:

How To … Create Useful Specification Documents

Prototypes are great blueprints for design and development, but sometimes you need a real specification document, which can include much more than just screenshots of your prototype, e.g. explanations, functional requirements and things invisible in a screenshot, such as interactive behavior. Pidoco allows you to generate such specification documents with only a few mouse clicks and gives you some powerful options. This post takes you through the setup and export of your specification document.


Choosing the right document type

To generate a specification document, start by picking the document format. To do so, open your prototype and select PDF Export or RTF/Word Export from the “Export” dropdown in the toolbar. Pick the PDF export if you want to generate a final, non-editable document, e.g. for final approval. Pick the RTF/Word option if you want to be able to edit the specification document later. The content is the same in both cases and can be configured in the next step.

Export a specification document of your prototype using the “Export” button in the toolbar

Hint: If you have an existing specification template, you can embed screenshots of your prototype pages as direct links that you can easily update via the respective function, e.g. in Microsoft Word or PowerPoint documents. You can find the links in the “Share” dropdown in the toolbar.


Configuring the specification document

Next, a dialog window will open and allow you to choose which parts of the prototype should be included in the specification document. You can select individual pages to include and opt to include additional information. To do so, simply toggle the respective boxes.

Selecting the parts of a prototype to be included in the specification document


Including Screenflows

Screenflows help you visualize processes (e.g. the log-in process of your website), demonstrate use case scenarios, depict hierarchical structures (e.g. a sitemap) or simply give the reader a quick overview of the pages your prototype contains.

If you have created screenflows and want to include them as screenshots in your specification, select this option. Screenflows are shown in the first chapter of the specification document in alphabetical order.


Example of a screenflow showing a log-in process as appearing in the specification document.


Including Pages

Usually, you will want to include some or all of the pages of your prototype, but you can also select just individual pages, for example if you have experimented with several versions of a page but only need the winning candidate for the specification. Pages are shown as full screenshots in the specification document and appear in alphabetical order, grouped by folder. Pages appear in the second section of the specification document, right after the screenflows.

For optimal legibility, the screenshots are automatically scaled to full page width. Long pages may therefore be split up across two or more pages in the specification document.

Prototype pages are listed under “Pages” in the specification document.

Hint: The Comment stencil lets you post notes on your prototype pages, which are useful for explaining details you don’t want to or don’t have time to prototype. You can change their visibility in the specification document (and simulation) via the Context Menu.


Including Layers

If you are working with layers, you may want to include them in the specification document, for example to show which building blocks make up your prototype and are repeated in various locations. Layers appear as screenshots in the third section of the specification document, right after the pages.

Select “Include page -> layer references” to show, after each page in the pages section of the document, all layers in use on that page. If you select “Include layer -> page references”, each layer in the layer section of the document will be followed by a list of all pages containing it. This can be very useful information for developers trying to understand how the software will work or deciding how to structure the code.

Layers are shown in the last section of the specification document. The “Header” layer is used on six pages.


Including Page details

Including pages in the specification document (see above) will only give you the screenshots of the respective pages, but since the specification document is not interactive like the simulation, some vital information may be missing. Imagine a dropdown menu: you will only see the first entry in the screenshot because the dropdown can only be shown in one state, but the developer will need to know all the other entries as well.

Page details let you add this type of information to the specification document by simply selecting the “Include page details” option. Page details include:

  • Information on interactive behavior, e.g. what happens when a link is clicked or when the user performs a certain touch gesture
  • Element configurations, e.g. contents of a dropdown menu or states of elements. This information can easily be copied from the document for reuse.
  • Annotations that you have added via the context menu to elements like individual stencils or entire pages

Page details will be listed below each page in the specification document and referenced in the screenshots by small numbers (1, 2, 3, etc.).

Page details are referenced on the page screenshots and provide information not directly visible in the screenshots.

Hint: To add annotations to an element, open the context menu and type into the text field at the bottom.


Including Discussions

Discussions containing feedback, change requests or further information can be displayed in the specification document, too. This helps you visualize the progress of the project and document the decision-making process. It can also be immensely helpful in resolving arguments with clients about who requested a change or why a certain feature was implemented in a particular way. Discussions are referenced by letters in the screenshots of the respective pages (A, B, C, etc.).


Starting a discussion in the Simulation View. Discussions can help track and document change requests and decision processes.



Finally, click the “Send” button. Depending on the size of your prototype, the specification document will be generated and automatically downloaded within a few minutes.

Message confirming the successful generation of a specification document


You can customize your specification document to the needs of different recipients. A client may need different information to sign off on a concept or approve a budget than the developer writing the code, so it can be useful to include only selected sections in the specification document. Whatever configuration you choose for your specification export, Pidoco will remember your last settings, so you won’t have to redo them every time. Once you have your document set up, one click on the “Export” button in the toolbar will suffice to generate your spec.


That’s it! You have successfully created your first specification document!


Do you need help with your specification document? Just send us a message via support@pidoco.com or Facebook, Twitter or Google+.


Happy Prototyping!

How To … Create a Linkable Combo Box

The combo box is a common UI element used in many websites and software applications. This post explains how to quickly prototype a combo box that links to different pages depending on which entry is selected. Here is how it’s done in three simple steps…


Step 1: Create prototype pages

First, create the page that will contain the combo box as well as the pages that the combo box will link to. Here’s an example prototype of a web shop where the combo box is used to navigate to the pages “Products” and “Solutions”:

Screenflow showing a combo box serving as navigation element

If you want the combo box to be displayed on more than one page, consider placing it on a layer.


Step 2: Add entries to the combo box

Next, add the combo box stencil from the Stencil Palette and place it in the desired position. Double-click on the combo box to edit the entries, e.g. using the names of the target pages.

Adding a combo box stencil to a prototype


Step 3: Define interactions

Finally, we need to add interactivity to the combo box. To do so, open its Context Menu and click on the Interactions tab. In the Interaction Dialog, choose “changes the selection” as the trigger in the first column and select the entry you want to link up under “Value”. In the second column select “show page” as the reaction and select the page you wish to display. Then hit “Save” and repeat for every entry of the combo box you wish to link.
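For readers who like to think in code, the mapping this step creates can be sketched in a few lines of plain JavaScript. The entry and page names come from the web shop example above; the lookup function itself is illustrative and not Pidoco’s actual API:

```javascript
// Each linked combo box entry is tied to the page its "show page"
// reaction opens. Entry and page names mirror the example above.
const linkedEntries = {
  "Products": "Products",
  "Solutions": "Solutions",
};

// Returns the page to show when the selection changes,
// or null if the chosen entry has no interaction attached.
function pageForSelection(entry) {
  return Object.prototype.hasOwnProperty.call(linkedEntries, entry)
    ? linkedEntries[entry]
    : null;
}
```

Entries without an interaction simply do nothing in the simulation, which is why the sketch returns null for unlinked values.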

Adding the "changes the selection" interaction to a prototype element using the interaction dialog

Linking up individual entries of a combo box using the “change selection” option of the interaction dialog


That’s it! You have successfully created a linkable Combo Box!


Do you need help with linkable combo boxes? Just send us a message via support@pidoco.com or Facebook, Twitter or Google+.


Happy Prototyping!

Top 5 Alternatives to Usability Tests

Usability tests are often perceived as a necessary evil or, even worse, are ignored entirely. This disrespectful treatment has many reasons. Sometimes the project schedule is so tight that there’s no time to properly test the new website or app before its release. In other cases, decision-makers think that usability tests are too expensive and that necessary fixes can easily be made afterwards. Even where the latter is true, fixing problems after release is usually many times more expensive and time-consuming and might result in a painful initial loss of customers. Since we at Pidoco believe that usability testing is an essential part of the development process, we have compiled a list of the top 5 alternatives to classical lab-based usability testing that may work for you, even if you’re on an extremely tight schedule or have no usability budget.

Usability tests can be run anytime and anywhere without being cost- or time-intensive.


1. Remote Usability Testing

Remote Usability Tests are quite similar to traditional usability tests. The main difference is that the test is not performed in a research lab and researchers and testers do not have to be in the same physical location or even in the same time zone. Instead, they participate using various online tools like web conferencing and screen recording solutions to communicate or document test sessions. One of the great advantages of this method is that the test users can complete a remote usability test while remaining in their natural environment, i.e. in their office or at home. This provides a more realistic testing scenario than a test lab. In addition, scheduling does not depend on lab availability; instead tests can be conducted whenever and from wherever is most convenient. In order to participate, test users typically receive a web link from you to a target page, which gives further instructions on the test including tasks and questions. The website or prototype to be tested is typically also hosted online. Usually, the test session is recorded in order to document the actions and comments of the participant for later analysis. While this type of test does not require a lab, it does depend on a reliable tool set, including a web conferencing and/or phone system as well as a functioning screen recording setup, if you would like to record the sessions. It is key that the tools can easily be operated by your test users and that they can easily access the test object.

You can conduct two different kinds of remote usability tests: moderated and unmoderated. Running a moderated remote usability test means that the participant and test moderator are in direct contact during the test session, e.g. via phone or a web conferencing solution, much like in a lab setting. This facilitates communication (e.g. via phone or chat) and allows for direct feedback and help, if necessary. The test moderator can explain tasks or ask questions to gain more insights, while the participant can “think aloud”, voicing his intentions, expectations and feelings vis-à-vis the product. During an unmoderated usability test, the participant works independently and has to complete the tasks without direct input from the test facilitator. In this case, no real-time support or direct feedback is available.

Read more:
Remote usability tests: Moderated and unmoderated by Amy Schade
A moderated debate: Comparing lab and remote testing by Susan Fowler


2. Heuristic Evaluation

Another option is the expert review. This systematic inspection of your product typically requires usability or UX specialists, who mainly offer their expertise as consultants or via research centers. The experts use and click through your website or mobile application the way your target group would, assuming the perspective of a typical user. For this, the researchers work on their own, in their offices or, if necessary, in their own testing laboratories. During the testing session, they pay particular attention to the details of your product and compare them with accepted best practices – the usability principles (heuristics). This heuristic evaluation is mainly based on the experts’ knowledge as well as on the latest human factors publications.

In their review, experts analyze a website’s or application’s

  • language and style for simplicity, natural flow and native use,
  • structure for consistency and logic,
  • help sections, error and help messages as well as documentation for clarity and intelligibility,
  • instructions and applicable shortcuts for clarity and findability.

Based on their findings, the experts typically conclude their evaluation with a written report describing problems in terms of their severity, using dimensions like frequency (How often does the problem occur?) and persistence (Is it possible to work around the problem or to solve it?), as well as each issue’s impact on successfully completing a task. Finally, the report presents recommendations and potential solutions to overcome the usability issues.

Read more:
Heuristic evaluations and expert reviews by usability.gov
Expert review: alternative to usability testing? by User Intelligence


3. Quick & Dirty Tests

As the name suggests, a quick & dirty usability test can be run quite simply and quickly and helps you review the usability or design of your product or an individual feature you are about to add to an existing product. First, you can conduct a quick test yourself by critically reviewing your own work: open your prototype and “naively” click through it as if you were using it for the very first time. As this unbiased view is difficult to achieve, in a next step you can ask one of your colleagues – whether or not she is involved in the development process – to have a look at your prototype for a few minutes. Show her your work and see what happens. Pay particular attention to

  • how your “test user” responds to your prototype,
  • what she does, and
  • what she expects to do with your product-to-be.

Doing so, you will receive a first internal review, which will usually help you detect major inconsistencies and errors and may even earn you some positive feedback. It’s up to you whether you want to sit next to your tester while she is examining your product, record the test session, simply take a few notes, or leave it to your tester to give you brief feedback (either in writing or in a conversation). For this type of test, you will need no test lab, no expensive tools and no recruiting overhead. There is no limit to how often you can run quick tests, as they are neither cost-intensive nor time-consuming. And even a single session can help you improve your work and end up with a better product at the end of the development process.

Read more:
Quick and dirty usability testing by Leah Buley


4. Guerrilla Usability Testing

Guerrilla usability tests are similar to classical usability tests, but require no test lab and make do with a minimalistic set-up. The idea is to cut down overhead and required time by catching testers from your target audience where they are likely to be found, e.g. in a coffee shop, a store or at an event. This works especially well when designing for the mass market, but can also work with more specialized target groups depending on the context, and, of course, requires a more or less mobile set-up, often consisting of just a notebook with screen recording software or a video camera to tape the testers. Guerrilla tests are more informal than classical usability tests and aim at quick feedback, allowing for frequent testing throughout the entire development process. Guerrilla testing can be used to test almost anything – from sketched concepts to fully interactive prototypes to physical products – and works quite well if you want to quickly validate your current work.

Depending on the product in question, you can even run guerrilla tests with coworkers or colleagues in the cafeteria. The test itself follows the classical format: the test facilitator sits next to the participant, explains the task(s) and takes notes (or has an assistant take notes) while the test user thinks aloud as she clicks through the site. Like quick tests, this comparatively inexpensive way of testing can be used anytime throughout the whole development process and is an easy way to receive direct feedback on your product.

Read more:
The art of guerrilla usability testing by David Peter Simon
Changing the Guardian through guerrilla usability testing by Martin Belam


5. Contextual Interview

While the previously mentioned usability testing methods can be used regardless of whether you are testing a pre-release prototype or an existing product, the contextual interview is most helpful when doing research on an existing website or app as the basis for a re-design project or a planned feature addition. It is therefore more of a user research method than an evaluation method, but may be used as such or combined with one. During a contextual interview, the test users remain in their natural environment while the researcher sits next to them and watches them work with the product. Unlike in formal usability tests, the test subjects usually do not complete carefully designed tasks, but rather use the product in the context of their everyday tasks, e.g. while at work. This enables the researcher to learn about things like

  • the general use of the product,
  • the issues the tester faces,
  • which technical equipment is used,
  • how long it takes to complete a task,
  • the environment the user works in, or
  • the actual purpose for using the product.

The researcher can choose to only observe or to ask additional questions to clarify or complement his observations. Of course, you may combine the contextual interview with traditional user tasks to be able to compare test sessions more easily, allowing the user to try out some things on his own and to complete specific tasks.

Read more:
Contextual Inquiry by usabilitynet.org
Contextual interview by usability.gov


Happy Testing!

How To … Play a Sound?

Whether it’s a game, a music app or the alert function of your favorite social media app – sounds are an essential part of modern applications and can significantly affect user experience! That’s why with Pidoco’s Extended Interactions, you can now add sounds to your prototypes.

A while ago, I created a mobile prototype called “Fortune & Destiny”, which is an interactive dice and cards game. To create a realistic playing atmosphere, I added a few sounds. Below, I’ll show you how it’s done…


Step 1: Select a trigger element and add an interaction

To play a sound in your prototype, you need an element that will trigger the sound. For example, you may want to play a sound when a certain button is clicked. That button will be the trigger element, to which you will need to attach the sound. You can also tie a sound to a page or a layer.

So, first find and select the trigger element. In this example I attached the sound to a page. This is done via the Context Menu of the trigger element using the “New interaction” option in the “Interactions” tab. Clicking “New interaction” opens the Interaction Dialog, where I specified the interaction “When the user shakes the device then play a sound” using the two dropdown menus at the top.

Adding a sound interaction to a page using the context menu


Step 2: Upload a sound

Once the sound interaction has been added to the trigger element, we can upload the sound file. To do so, click on the “Upload” button to select the desired sound file from your local or external drive. (Please note: only MP3 files are supported.)

Uploading a sound from the local drive via the Interaction Dialog


Step 3: Choose a sound and save

Right below the “Upload” field, you will see a list of all sound files you have uploaded. Now all you have to do is choose the file to be played (to preview the sound, simply press the “Play” icon next to the file), set the duration and decide if there should be a delay. At the end, don’t forget to click “save”.
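If it helps to picture what the delay and duration settings do, here is a tiny sketch in plain JavaScript. The function name and structure are mine, purely illustrative of the timing behavior, not Pidoco’s implementation:

```javascript
// Illustrative sketch: a sound interaction's delay and duration
// translate into a start time and a stop time relative to the trigger.
function soundSchedule(triggerTimeMs, delayMs, durationMs) {
  const start = triggerTimeMs + delayMs; // playback begins after the delay
  const stop = start + durationMs;       // and ends once the duration elapses
  return { start, stop };
}
```

So a 500 ms delay with a 2 s duration means the sound plays from 0.5 s to 2.5 s after the trigger fires.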

Choosing a sound via the Interaction Dialog


By the way, if you want to apply the same sound to multiple trigger elements, you don’t have to repeat all steps! Simply select the first trigger element that already has the sound attached as well as any other elements that should use the same sound, open the Context Menu, and select the Interactions tab. Here you will find the “All” icon to apply this interaction to all the selected elements.

Applying an existing interaction to multiple elements


That’s it! You have successfully created a prototype that can play sounds!

Do you need help with adding sounds to your prototypes? Just send us a message via support@pidoco.com or Facebook, Twitter or Google+.


Happy Prototyping!

Pidoco gets a facelift

At Pidoco we constantly work on improving our tool and listen to your feedback in order to deliver a great prototyping experience! That is why we are bringing you some exciting new features today, including a bigger canvas, an improved toolbar and a modernized look! With this facelift, prototyping will be even easier and faster. So what is new?

Brighter UI, Less Clutter And A Flat Design
First off, we have given the Page View section of Pidoco a brighter, more modern look by removing unnecessary UI elements and flattening the design. This makes it easier to navigate Pidoco and makes work more comfortable for your eyes because there’s less clutter. In fact, the entire look of our tool has been simplified, which will help you to find things more easily.

The new, cleaner UI features a flat design and allows for more efficient prototyping


More Work Space
Many of you work on large screens, but some do not. So we took a look at how to improve the prototyping experience for those working on smaller screens. As a result, you can now collapse the My Global Layers and My Interactions panels in order to enlarge the work space of your Editing Panel. Just click on the little arrow to hide or unhide a panel. While we were at it, we also shortened the names of the panels to “Layers” and “Interactions”, respectively.

Collapsing the layers panel increases work space


Improved Usability
With the design changes, we have also given our Toolbar a makeover. It now wows with shiny new buttons that are larger, easier to click and feature text labels that make their functions more obvious, saving you time looking for the right button. We also updated our icons to make them even more intuitive. The most important buttons, “New” and “Simulate”, have been highlighted in green and orange.

The new toolbar including new icons and text labels as well as highlighting


Smart User Support
Pidoco offers a number of great features that are not always obvious at first glance. To make them more easily accessible, we have improved the tooltips that give you hints on working with Pidoco. When you are editing your prototype, little hints and tricks will be displayed in the top right corner to facilitate your work. And if you already know what you are doing and don’t need those hints anymore, you can now turn them off.

Hints make “hidden features” more easily accessible but can be turned off if not required


So, what do you think about the design update? Drop us a comment here or via Facebook, Twitter or Google+ or contact us via support@pidoco.com. We are looking forward to hearing from you!

By the way, did you know that you can submit your feedback and ideas in the Pidoco Forum, too? Simply make a suggestion and all other users can vote on it. Our developers will comment and keep you updated on the status of your suggestions.

Happy Prototyping!

You are not a Pidoco user, yet? Why not register for a free 31-day trial today!

How To … Prototype Location-Based Services?

I have already introduced you to the sensors embedded in mobile devices, of which the GPS and the accelerometer are the most commonly used. Location-based services (LBS for short) in particular make use of them. One example of an LBS is an interactive city guide, but you may know some of these, too:

  • “Find Me” or “Where am I” services (finding nearby restaurants, shops or ATMs, apps detecting your current position, etc.),
  • social networking services (adding your location to an uploaded picture or to a chat message),
  • locator services (integrated into emergency services),
  • interactive location-based games.

In this post I want to show you how to prototype location-based services.


Creating a prototype for a location-based service

To show you how to create an LBS prototype, I will use an example prototype for a smartphone app that I call “Berlin UX Guide”. It contains a map of Berlin with various tourist and UX hotspots marked by little icons (sights). You can click on any sight on the map to open a little pop-up with information about it. In addition, if you have GPS activated and walk through Berlin with the prototype, the pop-ups will also appear automatically when you come near a sight. This is the LBS part of my prototype. So, let’s have a look at how to create this location-based prototype…
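Under the hood, a proximity trigger like this boils down to a distance check between the user’s GPS position and each sight. Here is a hedged sketch in plain JavaScript of that logic; the function names, the sight coordinates and the radius are my own illustrative values, not part of the prototype:

```javascript
// Great-circle (haversine) distance between two GPS coordinates, in meters.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Returns the sights whose pop-up should open for the current position,
// i.e. all sights within the given radius of the user.
function nearbySights(position, sights, radiusMeters) {
  return sights.filter(
    (s) => distanceMeters(position.lat, position.lon, s.lat, s.lon) <= radiusMeters
  );
}
```

In a real web app, the current position would come from the browser’s Geolocation API and this check would run on every position update; in the prototype, Pidoco’s GPS trigger takes care of that for you.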


Step 1: Create the sight pages

As always, we start our prototype by building the basics, including the various pages. In my example, the prototype basically consists of a map page and a little “pop-up info page” for each sight on the map (sight pages). The little pop-up pages will be displayed as overlays on top of the map page. In addition, there’s also a start page that appears when the app opens, which I will not describe in further detail here. Here’s an overview of the pages we need:

Screenflow of "Berlin UX Guide"

Screenflow of “Berlin UX Guide”

Let’s start with the first sight page. Create a new page and adjust the page size via the Context Menu of the page using the width and height parameters in the Properties tab. Pick a size that is smaller than the screen of the smartphone but large enough to display the necessary information. I have chosen a width of 300 px and a height of 240 px.

Context Menu of a Page to Edit the Properties

Editing the page properties using the context menu

Now we can add content to the overlay, e.g.

  • name of the sight,
  • address,
  • short description,
  • rating scale, and
  • contact details (website, telephone number as well as links to social media).

Don’t forget to add an option for closing the pop-up, e.g. an “X” at the top right. Voilà, the first sight page is done! Repeat for every sight you wish to include.


Step 2: Create the map page

Now we need to build the map page, which consists of a map image and icons marking the respective sights. First, we need to create a new page. Use the “smartphone landscape” option to generate a page suitable for a full smartphone screen or manually reset the page size if necessary. Next, we need the map itself. I’ve taken a screenshot showing a map of Berlin’s city center in this example. Upload the map via the image upload dialog and insert it as a page background.

Upload your image to the prototype

Upload your image to the prototype


Select your background image via the Context Menu of the page

Select your background image via the context menu of the page


Now add icons to mark the location of the individual sights. In my example a star represents a tourist sight and a flame a UX hotspot. Both icons can be found in the Stencil Palette under Symbols (expand to see all).

Icons of the "Berlin UX Guide"

Icons of the “Berlin UX Guide”

Your finished map page may look like this:

Map of Berlin including already created sights and UX hotspots (simulation view)

Map of Berlin including various tourist sights and UX hotspots (simulation view)


Step 3: Link overlay and map

Now we have all the basic building blocks in place, so it’s time to link our pages up. To display the sight pages as overlays we will use the “show overlay” reaction. For each sight icon add the following interaction: Open the Context Menu of the icon and click on “Add interaction”. In the Interaction Dialog, select “taps” as the user action and “show overlay” as the system reaction. Then select the desired sight page from the “Content to show” dropdown as the content to be shown. The selected sight page will be shown as an overlay centered on top of the map page in the simulation.

Add an Interaction to Show Overlay

Adding a “show overlay” interaction to a prototype element using the interaction dialog

Here’s what the simulation will look like:

The map of the "Berlin UX Guide" and an opened overlay for the Brandenburger Tor (simulation view)

The map of the “Berlin UX Guide” and an opened overlay for the Brandenburger Tor (simulation view)

To allow the overlays to be closed, add the following interaction to the “X” on each sight page: When the user “taps” then “hide overlay”.

Adding a "hide overlay" interaction using the interaction dialog

Adding a “hide overlay” interaction using the interaction dialog


Step 4: Add GPS

Finally, let’s integrate the location data so that the pop-ups can be triggered when the user arrives at a sight. To do so, go back to the map page and add the following interaction to each sight icon: Open the Interaction Dialog and select “changes the location” as the user action. Location changes can be triggered when the user enters or leaves a certain spot. As the overlay should appear when the user approaches the sight, select the trigger “enters” and zoom into the map until you have found the correct location. Mark the location by clicking on the map and setting the location radius with your mouse. Then add the reaction “show overlay” and select the desired sight page from the “Content to show” dropdown. To notify the user when sight information is displayed, let’s also add a “vibrate” reaction. (Here I have chosen a duration of two seconds.)
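The “enters” trigger configured above is essentially a geofence: the reaction fires the moment the position first moves inside the radius drawn around a sight. As a rough sketch of the underlying idea (not Pidoco’s actual implementation; the coordinates and the 50 m radius below are invented for illustration), the check could look like this:

```python
# Hypothetical geofence check: fire when the position crosses from outside
# into a circle around a sight. Coordinates and radius are illustrative.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in meters

def entered(prev_pos, new_pos, sight, radius_m=50):
    """True exactly when the user crosses from outside into the radius."""
    was_inside = distance_m(*prev_pos, *sight) <= radius_m
    is_inside = distance_m(*new_pos, *sight) <= radius_m
    return is_inside and not was_inside

brandenburg_gate = (52.5163, 13.3777)  # illustrative coordinates
far = (52.5200, 13.4050)               # roughly 2 km away
near = (52.5164, 13.3778)              # a few meters away

print(entered(far, near, brandenburg_gate))   # True: show the overlay
print(entered(near, near, brandenburg_gate))  # False: already inside, no re-trigger
```

Triggering only on the outside-to-inside transition is what keeps the pop-up from reopening on every position update while the user lingers at a sight.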

Add two reactions to a location change action

Adding two reactions to a “changes the location” action

That’s it! You have successfully created an interactive prototype that can simulate location-based services!

Do you need help with adding location-based interactions to your prototypes? Just send us a message via support@pidoco.com or Facebook and Twitter.


Contributing to the Berlin UX Guide

The Berlin UX Guide is designed as a community project uniting city explorers and UX experts. If you want to add a sight or new UX hotspot send an email to support@pidoco.com including your Pidoco user name. If you don’t have a Pidoco account yet, just register at pidoco.com/free. We will send you a collaboration invitation which will allow you to add new pages to this prototype.

Start Screen of the Berlin UX Guide

Start screen of the “Berlin UX Guide”



PS: If you would like to read more about location-based services for mobile devices, have a look here:

“A New Map Gives New Yorkers the Power to Report Traffic Hazards” by Sarah Goodyear

“Advanced Location-Based Technologies and Services” by Hassan A. Karimi

“Location-based Services” by Polaris Wireless

Top 10 Challenges in Designing Mobile Apps

We all know this: we browse the app store looking for cool new and promising apps, download them, and after a few seconds we notice some odd bugs, become annoyed with the handling, keep tapping the wrong button or simply don’t find what we are looking for. Some apps upset us even more because they are absolute energy guzzlers. We then often ask ourselves: Why is this app so complicated, and whatever happened to usability? Well, to be honest, I don’t know the answer! But app users are merciless and fierce customers!

So if you are about to create your own app, you should avoid such pitfalls to prevent dissatisfied and unhappy users! That is why I created a top 10 list of the challenges you might face when designing a mobile app and how you can overcome them. To give you some real insight into a developer’s life, I sat down with Katja and Lars of the Berlin-based startup bytecombo to talk about the challenges they faced when releasing their brand new game, “Bronko Blue, the Kitten Copter”.


1. Have an Idea!

Every mobile app starts with an idea – whether vague or already concrete. I’m not telling you anything new, but most of the greatest ideas just appear out of the blue. As people use apps to make their lives a little more comfortable, productive and pleasant, the really big challenge is to have an idea that is awesome, innovative and creative – or exactly the app everyone was longing for. When thinking about design and realization, remember that less is more! A fast, responsive, reliable and well-structured app that works well in the mobile environment is better than a nice-looking, super-aesthetic, but unstable or buggy app.

Nevertheless, before you start creating your app, you need to think of another very important aspect: your competitors! Without going into detail and just to drop some important keywords, try to answer the following questions:

  • Who is my target group?
  • What is the aim of the app?
  • Which functions does the app have?
  • Is the app absolutely new or does it improve an existing one or is it an add-on etc.? (Analyze the innovation level, do some research!)
  • Is there a market for the app? (Do a competitor and/or benchmark analysis or user survey!)
  • What will be the price of the app? (Analyze the profitability!)
  • Where will the app be sold? (Selling via the most popular app stores or via extra channels and other/own websites?)


2. Know Your Target Group!

The main purpose of your app is to satisfy the users’ needs. This is key to getting good reviews and ratings as well as high adoption rates resulting in numerous downloads. So identifying the right target group is essential! To do so, do not rely only on the different analyses (see above), but also consider the behavior and knowledge of your potential users and the skills necessary to use the app. Potential questions here are:

  • Which skills are necessary to use the app, and is previous knowledge required?
  • Are the potential app users novices, experienced or expert users?
  • What are your users usually up to?
  • Where do they get information on new mobile apps from?

Furthermore, your target group defines, e.g., your business model, distribution channels, advertising and marketing strategy! As you can see, it is of utmost importance to know your users!


3. Mind the Costs!

Besides your idea and knowing your potential users, there is one more essential challenge at the beginning: the costs. Your financial as well as human resources define the frame of your app and hence both the extent and the duration of the realization process. The best approach here is to

  • have a clear financial concept and/or think of alternatives, such as private or crowd funding etc.,
  • define project members,
  • make a project plan,
  • create sub-projects (if possible),
  • set milestones and goals, and
  • define a budget – for the development and advertising of the app as well as for employees and unforeseen expenses.

You may find it helpful to consider lean development approaches to make sure you don’t over-develop your app at the beginning, but rather start with a “minimum viable product” that can be quickly developed.

To avoid high development costs, allow for a proper design or concept phase prior to the actual programming of your app. Sketching and deliberating on your new app is the key success factor, which can be simplified with the use of rapid prototyping tools. Because prototyping is relatively inexpensive and allows you to optimize your app concept without having to invest in coding, you will start with a verified draft of your app that “only” needs to be programmed, making later improvements largely redundant. This spares everybody’s nerves, improves the relationship between programmers and designers and, most of all, saves precious time and money!

Costs and profitability go hand in hand. So you should also have a clear idea on how you offer your app:

  • Will it be an app that is available for free? Will there be advertising?
  • Will there be a free trial version that needs to be upgraded or that has extra features you can add by paying for them?
  • Will it be an app that will be sold for a small amount of money?

The Successful Way From an Idea to the Successful Release of the App

Prototyping: A safer way for getting from an idea to the successful release of an app


4. Detect Users’ Requirements!

After this first planning phase the design process can be initiated. Probably the best way to start off is by sketching your app ideas. Using rapid prototyping for this will breathe life into your idea. A huge advantage is that you immediately get an impression of what you are creating and by using clickable wireframes you will get a very precise idea of how your app will work and be handled. As the biggest concern is to satisfy the users, it is really important to detect the users’ requirements and to understand what the users need and want.

With the help of a prototyping tool you get the chance to create realistic, interactive prototypes that not only look, but also behave like your “real” app. As you can simulate your app prototypes, you can test your app prior to the actual implementation with test users or other collaborators. During this iterative design process, you get immediate feedback. Analyzing the user requirements and optimizing your app according to the users’ needs go hand in hand, resulting in a great user experience.


5. Use Eye Candy!

Another challenge is to create a modern app that meets today’s technological demands, including user-friendly handling, comprehensible usability and of course a pleasant experience. Therefore my simple advice here is: use eye candy! This might sound a bit odd, but it is not self-evident! Static screens are still in use, although they are no longer en vogue. Instead, we have to internalize that transitions, animations and responsive design are the Zeitgeist. Swipe and pinch gestures support this easy and intuitive navigation behavior and make it an absolute must-have.

When embedding features and including elements, don’t forget about the user, especially when it comes to the day-to-day use of apps, because we need our fingers to operate them, most of all our thumbs, as recent studies show. This makes the so-called “thumb zone” quite important, i.e. the part of the screen that you can easily reach with your thumb when holding the device in one hand. Since our fingers only have a limited span, you should keep this in mind and choose the position of important elements wisely.

“The Thumb Zone” of a mobile phone (based on the image by Oliver McGough in “Designing for Thumbs – The Thumb Zone”)

“The Thumb Zone” of a mobile phone (based on the image by Oliver McGough in “Designing for Thumbs – The Thumb Zone”)


Finally, there’s a huge difference between designing an application for a classic desktop computer that is operated via keyboard and mouse and designing an app for a mobile phone or tablet computer: Since the latter are usually operated via touch screens using our fingers, interactive elements such as buttons, links or icons must not only look nice but also be large enough so that the user can tap on them. So, especially on smartphones, the screen space must be used wisely.


6. Make it interactive!

Like transitions and responsive design, interactions are a must-have! At the same time, they constitute a time-consuming challenge during the design process and hence should be considered in advance! The main reason is that mobile devices are full of high-tech sensors and hence offer a wide range of opportunities. At the same time, this is a great chance to create a unique app. So make use of the sensors! Interactions do not only include tap and swipe gestures; consider overlays that can present additional information or buttons that support various click options. A comprehensive “app experience” might be a good name in this context. So my advice here is to consider more tangible interactions that allow your app to respond to the environment, position and orientation of the mobile device, such as shaking, flipping and tilting the device to trigger an action, or to include GPS-based position data.


7. Make it clear!

An app should be self-explanatory, since users don’t want to think when using an app. Furthermore, keep in mind that the functioning or handling of the app might be clear to you (as the inventor), but this doesn’t necessarily apply to your users. A simple and clear structure can help deal with this challenge. Also, don’t forget about intuitive handling! Most of all, give brief instructions on what to do and how it is done. If necessary, embed or link tutorials giving additional help and support on certain tasks. Icons and thumbnails are a nice way to present information in a very compact manner, too. But be cautious! Too many icons can confuse your users, especially if you introduce icons that don’t correspond to native UI elements (such as a play button, left/right arrows, an envelope etc.). To avoid confusion, have a look at the respective operating systems (iOS, Android or Windows) or at relevant, frequently downloaded apps (e.g. the various social media networks) to see which symbols they use. Platforms like Apple’s iOS, Google’s Android or Microsoft’s Windows Phone also provide more or less detailed UI guidelines for app developers that will help you choose UI patterns your users will already be familiar with.


8. Create Empathy

Curiosity, creativity and loads of innovative potential keep the ecosystem of mobile apps at a constantly high level and make the market change rapidly. That is why a strong and healthy relationship with your users is essential. It’s all about a positive first impression and wellbeing, which is absolutely essential to relationships, whether between people or with products. So make the user feel comfortable. You can achieve this not only with an app brimming over with user-friendly, interactive features, but also with, e.g., a recognizable and memorable design, colors, logo or typical fonts. Maybe answering the following questions will help you as well:

  • What are the core UI elements of your app?
  • Are there already user stories you can rely on?
  • How and where do I attract potential app users?

You may and even should have a look at your competitors and the overall market, not in order to copy your competitors’ app design or structure, but to create something new and unique without reinventing the wheel. Most of all: you want happy and loyal users. Being there for them, listening to their problems when using your app (be aware that there’s always a tiny mistake or bug and someone who finds it) and providing immediate help are the keys to your users’ hearts. It’s all about winning your users’ empathy!


9. Performance vs. Battery Lifetime

When designing an app, layout and structure are only one side of the coin; performance and the energy needed to achieve it are the other. The challenge here is not only to design a nice app; it’s about having an app that runs smoothly without any bugs and is not an annoying energy guzzler. Often, however, performance and feasibility of the app are believed to be the exclusive responsibility of the developers, which is the main reason they are considered so late in the design process. The performance of an app on a mobile device is perceived differently by different users, but in general it’s judged by loading times, whether and how smoothly transitions and animations run, the number of errors, bugs and crashes, etc. Asking your developers to join your prototyping efforts or setting up a beta version to run early tests with test users will help you avoid such trouble.

Another aspect is the devices themselves! Your new app might function well on the newest mobile device, but there are also users with older devices out there! Too many visual effects, integrated sensors, images or anything else requiring a lot of temporary buffer memory and flooding the cache might lead to bad or disruptive performance.


10. Different Devices and Different Operating Systems

As already mentioned, your users will probably have many different devices with different screen sizes, especially if you are designing for multiple platforms. So designing an app for the latest device only can be a huge mistake. Apps should run on as many devices as possible – at least if you want to attract as many users as possible. Creating an app suitable for every device is a really big challenge, because the countless mobile devices bring several restrictions and limitations based on system requirements, embedded technologies, different sizes, pixel densities, screen dimensions etc. Although it may sound like a buzzword, responsive design can make life a little easier: screens become more fluid and can adjust to the various screen sizes and formats of the devices.
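To make the density part of this concrete: Android, for instance, specifies layout sizes in density-independent pixels (dp), where 1 dp equals 1 physical pixel on a 160 dpi screen, so one layout value maps to different pixel counts on different devices. The density buckets below follow Android’s documented values; the conversion function itself is just an illustrative sketch:

```python
# 1 dp = 1 px at the 160 dpi baseline (Android's mdpi bucket); denser
# screens get proportionally more physical pixels for the same dp value.
DENSITY_DPI = {"mdpi": 160, "hdpi": 240, "xhdpi": 320, "xxhdpi": 480}

def dp_to_px(dp, bucket):
    """Convert a density-independent size to physical pixels."""
    return round(dp * DENSITY_DPI[bucket] / 160)

# A 48 dp touch target (a commonly recommended minimum) on various screens:
for bucket in ("mdpi", "hdpi", "xhdpi", "xxhdpi"):
    print(bucket, dp_to_px(48, bucket))  # 48, 72, 96, 144 px respectively
```

The same idea is why designing in physical pixels for one flagship device breaks down on everything else: the dp value stays constant while the pixel count varies by a factor of three between the buckets shown.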

In contrast to desktop or even tablet computers, the screen space of mobile phones is rather limited. The smaller dimensions of smartphones pose a particular challenge to mobile designers, since the screen footprint will usually force them to cut down on features and sometimes even content as well as make smart choices about navigation. Especially when developing a mobile app to go with an existing website or web app, this can be challenging as the natural impulse is to require the mobile app to have the same functions and options.

Finally, it’s not only about the devices themselves, but also about their operating systems. The three main systems – iOS, Windows Phone and Android – have their very own patterns and UI objects, and regular updates have become “normal”. So be prepared: there will be bugs requiring fixes on a regular basis. Actually, this might be the biggest and most time-consuming challenge in this context.

To deal with such technical challenges, a proper testing phase or even a beta version may be helpful and should be considered.


All in all, it’s about making your mobile app unique, appealing, effective, pleasurable and of course memorable! This can be quite a difficult and even lengthy challenge, and there may be some obstacles on your way to the launch. But as we all know, there are many mobile apps out there that are both real user magnets and incredible success stories.

Challenges of Mobile Designing and Ways to Create Unique Apps for Happy Users

Challenges of mobile app design and how to create unique apps that make users happy


To put the above-mentioned challenges into perspective, I spoke to the founders of bytecombo, Katja and Lars, who can tell tales about the challenges that emerge on the way to releasing a mobile app. On July 26, 2014 their new online game “Bronko Blue, the Kitten Copter” was launched on the international games market. This indie game is about a cute cat called Bronko who is totally in love with his balls of wool. Every morning he sits down to count his wooly treasures, but one night some of the balls get blown away, and the next day the horrible trouble starts all over. Suspecting the mean cows, he makes a plan to get his riches back. In the full version, this side-scroller is a challenging journey during which Bronko has to fight cows and windmills, ram or shoot stones and fly through the different seasons of the year.

"Bronko Blue, the Kitten Copter"

“Bronko Blue, the Kitten Copter”

Here’s what Katja and Lars told me about the challenges they experienced in the past months:

How do you feel now that “Bronko” frolics through the virtual worlds?

Katja: Relieved, excited and frightened all at the same time. Relieved because it took us longer than expected to finish the game. Excited to know how people will like the game and what will become of “Bronko”. Frightened that “Bronko” might not be found in the mass of games or even worse people won’t like it.


From the very first idea to the release last Sunday, how long have you been working on “Bronko“?

Lars: At the beginning we just wanted to make a small game similar to “Flappy Bird” and planned a developing phase of three months. But when working on it, we had so many new ideas every day that time flew by.

Katja: In the end we worked on it for a year, spending two days a week, to finish and publish the game on several platforms. While the actual programming was done within three to four months, steps like marketing, fixing cross-platform issues, fine-tuning and optimizing the concept took a lot more time than expected.


Let’s talk about the challenges. Which were the biggest challenges for you during the design and release process?

Lars: There have been a lot of challenges along the way. For example, there’s the cross-platform issue. Choosing the right technology for game development was one of the hardest steps since there are plenty of options. We decided to develop cross-platform using a language called Haxe and, on top of it, the frameworks OpenFL and HaxeFlixel. In theory, cross-platform development is a perfect solution: you develop only once and then publish to several platforms. In reality, we had a lot of trouble getting the game to run stably on each platform. It took us at least as long to fix the cross-platform problems as it took us to develop the actual game.

Katja: Another challenge is, or was, keeping calm. Like Bronko, we had some impediments on our way, but keeping calm and getting on with it sometimes proved to be very hard. As we are no marketing experts at all, this was another challenge for us, so we had to do a lot of research on what to do and when, especially with a small budget. Fortunately, there are a lot of resources out there on the internet.

Lars: Another difficulty was getting proper feedback. It proved a lot more difficult than expected to get feedback while developing, especially from friends and family, as people are very nice about things and criticism isn’t always very specific. The same goes for other gamers and developers: you can get useful feedback like bug reports, but more often it’s only a negative or positive rating, which doesn’t help a lot.


Which advice would you give to other developers planning to create and release a game for mobile devices?

Katja: Since you can’t be sure if you will finally succeed, you should love what you are doing. Love playing games, love being creative, love implementing and even love promoting it. What you need is patience and in the best case a plan “B”, for example for the funding.


Now that “Bronko“ has been released, do you already have new games or app ideas?

Lars: Yes, actually we do have a lot of new ideas. Next we will create four prototypes for new games, which will be smaller ones than “Bronko Blue”.


Looking back one year from today, would you do it again?

Katja: Definitely! For as many problems as occurred, we gathered just as many new experiences.

Lars: And most of all, we had so much fun on the way to the release.


About bytecombo:

The Berlin-based startup was founded in 2013 by Katja and Lars, who are in love with licorice, coffee and good mobile games. They are passionate about innovative, small indie games with nice, simple graphics. And that’s exactly what the likeable duo wants to develop.


If you are curious about Bronko’s adventures, have a look at their website bytecombo.com, stop by their Facebook profile or download the game from the well-known app stores and go on crazy adventures with the cute little cat. To get a little impression of the game, here’s the latest trailer:

For more detailed reading on the different challenges of designing a mobile application check out the following links to related articles, blog posts and books.


“Designing for Thumbs – The Thumb Zone” by Oliver McGough

“Designing Mobile Apps, Where to Start?” by António Pratas

“What Does it Take to be a Mobile Designer Today?” by Sergio Nouvel

“User Experience is Integral to Winning App Design” by Rahul Varshneya

“5 Advanced Mobile Web Design Techniques You’ve Probably Never Seen Before” by David Fay

“Mobile: Native Apps, Web Apps, and Hybrid Apps” by Raluca Budiu

“Seven Guidelines For Designing High-Performance Mobile User Experiences” by Ivo Weevers

“How to design a mobile app.” by Alexander Kirov



“A Project Guide to UX Design: For user experience designers in the field or in the making” by Russ Unger, Carolyn Chandler

“Interactive Design: An Introduction to the Theory and Application of User-Centered Design” by Andy Pratt, Jason Nunes

“Smashing UX Design: Foundations for Designing Online User Experiences” by Jesmond Allen, James Chudley

“Designing Apps for Success: Developing Consistent App Design Practices” by Matthew David, Chris Murman

“The UX Book: Process and Guidelines for Ensuring a Quality User Experience” by Rex Hartson, Pardha S. Pyla


How To … Simulate Device Motions in Your Prototypes

Modern mobile devices, e.g. your smartphone, are quite powerful. They are little computers with both high capacity and high sensitivity. One might even say they have razor-sharp senses, including the best eyes and ears and a brilliant sense of balance – all thanks to the multiple sensors embedded in a mobile device. It is quite impressive how many sensors some devices include nowadays: environment sensors like the barometer, thermometer or light and proximity sensors measure properties of the device’s surroundings; motion sensors such as the accelerometer, gravity sensor or gyroscope measure acceleration and rotational forces; and position and orientation sensors determine the device’s physical position using magnetometers, GPS or compass features. Further sensors are of course the microphone, the camera and the touch screen, which are probably the best known, not to forget the fingerprint scanner, the WLAN and GSM antennas and finally Near Field Communication (NFC) and Bluetooth. The picture below provides a nice overview of the various sensors.

Overview of common mobile device sensors

Overview of common mobile device sensors. Source: Inter Free Press via Wikimedia Commons, licensed under Creative Commons Attribution 2.0 Generic


These sensors enable a wide range of great functions, e.g. light and proximity sensors tell the device to lock the screen when you hold it to your ear during a phone call in order to prevent accidental touch gestures, while the accelerometer is used when you turn your mobile device and the screen orientation changes from portrait to landscape. But these sensors also provide a host of novel opportunities as they can be applied to a great variety of domains, such as healthcare, safety or transportation and social networks. Furthermore, these sensors are useful in improving the user interface, in providing LBS and helping to detect and use environmental data. Examples include fitness apps that make use of the GPS sensor to track your route or apps tracking eye movement across the display using the built-in camera. Some apps using sensors can be real life-savers: Patients suffering from Alzheimer’s disease or dementia can use apps that track their locations via GPS and inform family members when they leave a certain route.

It is only natural that with so many options available, developers want to make use of them. And if you can create such beautiful apps, your prototypes should also be able to simulate them. So, why not use these opportunities in your prototypes? Here’s how!


Creating a prototype that reacts to device motions

To demonstrate how to create a prototype that can react to device motions I created a small interactive gaming prototype for mobile phones, which I called “Fortune & Destiny”*. There are two modes: Dice of Fortune (which will give you a score) and Dice of Destiny (which will give you an answer to a question). Shaking or tilting the phone will roll the dice and present you a result after giving a signal (vibration). So here is how it goes …

Preview of my prototype "Fortune & Destiny"

Preview of the example prototype “Fortune & Destiny” (in landscape format)

Step 1: Create the prototype pages

As usual, we start by building a basic prototype that includes the various pages used in the application. In my example I need a start page (actually, I have created two pages – one for portrait and one for landscape orientation), a page to choose between the two modes, an instructions page for each mode, and a page showing me the result (actually, I need one page per result I wish to simulate).

Now that we have created all the pages (if you don’t remember how to create overlays, have a look here), let’s add some interactions that will use some of the sensors in our mobile device. In this prototype we will mostly make use of the accelerometer.

Screenflow of "Fortune & Destiny"

Screenflow of “Fortune & Destiny”

Step 2: Add a “Turn Device” interaction

Interactions in Pidoco always consist of a pair of User Action and System Reaction (check out our blog post on Extended Interactions or have a look at our Glossary for more information). Let’s start by adding the first device motion interaction. First, let’s connect the two start pages (see the screenflow above) with a “turns the device” interaction so that the user can switch between portrait and landscape view by turning the device. To do so, open the Context Menu of the portrait start page, select the Interactions tab and click on Add Interaction; the Interaction Dialog will open.

Accessing the page context menu via the Breadcrumb Navigation

Accessing the page context menu via the Breadcrumb Navigation

In the left column, choose “turns the device” from the dropdown and select “Turns to Landscape”. To have the landscape start page displayed as a result, pick “show page” as a reaction in the right column (then) and select the appropriate page from the Page/URL dropdown. You can add an animation and maybe also a delay to it. Here, I decided on “slide in from left”. That’s it. Now we can proceed with the next interaction.

Interaction Dialog for the User Action "Turns the Device"

Interaction Dialog showing the settings for a “Turns the Device” interaction
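Pidoco wires this trigger up for you, but conceptually an orientation check boils down to comparing the viewport’s width and height. Here is a minimal JavaScript sketch of that idea (the function and page names are purely illustrative, not part of any Pidoco API):

```javascript
// Classify device orientation from viewport dimensions. In a real app
// these values would come from window.innerWidth/innerHeight on an
// "orientationchange" or "resize" event; as a pure function the logic
// is easy to test.
function classifyOrientation(width, height) {
  return width > height ? "landscape" : "portrait";
}

// Decide which start page to show, mirroring the
// "turns the device -> show page" interaction pair.
function startPageFor(width, height) {
  return classifyOrientation(width, height) === "landscape"
    ? "start-landscape"
    : "start-portrait";
}
```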


Step 3: Add “Shake Device” interactions

As this prototype is about dice, we want to simulate real dice rolling, and shaking the mobile device seems to be a good way to do that. Let’s make it a bit more advanced and create a chain of reactions, starting with the sound of rolling dice, which I recorded in advance. Again, open the Interaction Dialog of the page from which the user will roll the dice, select “Shakes the device” as the trigger action, and define the Intensity (lightly, medium or heavily) and the shaking Duration the user needs to apply to trigger a reaction. I chose a light shaking intensity for only one second to make it easy for the user. Now add the System Reactions. In my example, I decided on three: the sound of dice, a vibration signal as shake feedback, and the display of the result page. To add a sound, select “play a sound” and upload it as an MP3 by selecting the file and clicking the upload button (please mind copyright and make sure you own the rights to it!). If only part of the sound should be played, define the Duration; if you leave this field empty, the entire sound is played, which is what I have chosen here.

Add the System Reaction "Play A Sound"

Interaction Dialog showing how to add a “Play A Sound” system reaction
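In case you are curious how a simulator can tell a “light” shake from a “heavy” one: one common approach is to watch the accelerometer’s magnitude and require it to stay above an intensity threshold for the configured duration. The sketch below illustrates that idea; the thresholds and function name are my own guesses, not Pidoco’s actual values:

```javascript
// Illustrative intensity thresholds in m/s^2 (gravity removed).
const INTENSITY = { lightly: 5, medium: 12, heavily: 20 };

// samples: [{ t: seconds, x, y, z }] accelerometer readings.
// Returns true once the acceleration magnitude has stayed above the
// chosen intensity threshold for at least minDuration seconds.
function detectShake(samples, intensity, minDuration) {
  const threshold = INTENSITY[intensity];
  let shakingSince = null;
  for (const s of samples) {
    const magnitude = Math.hypot(s.x, s.y, s.z);
    if (magnitude >= threshold) {
      if (shakingSince === null) shakingSince = s.t;
      if (s.t - shakingSince >= minDuration) return true; // trigger!
    } else {
      shakingSince = null; // movement too weak, start over
    }
  }
  return false;
}
```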

To add a vibration signal select “vibrate” in the reaction dropdown (“then“) and define both the Duration (here: two seconds) and, if desired, a Delay. To display the result page, select “show page”, select the respective page and define any desired animation, additional option or potential delay.

Add the System Reactions "Vibrate" and "Show Page"

Interaction Dialog showing how to add “Vibrate” and “Show Page” system reactions

I did this for all my “Dice of Fortune” pages. Furthermore, I decided to use delays to create a sequence of System Reactions and to get a more realistic feeling.
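To picture how the delays turn several reactions into a sequence: each reaction fires after its own delay relative to the trigger, so choosing staggered delays produces a chain. A tiny model of that (the names and millisecond values are just an illustration of my dice example, not a Pidoco API):

```javascript
// One trigger, three System Reactions, each with its own delay.
const reactions = [
  { name: "play dice sound",  delay: 0 },    // immediately
  { name: "vibrate",          delay: 500 },  // shake feedback
  { name: "show result page", delay: 1500 }, // after the sound
];

// Return the firing order; a real runner would schedule each reaction
// with setTimeout instead of just sorting.
function firingOrder(rs) {
  return [...rs].sort((a, b) => a.delay - b.delay).map(r => r.name);
}
```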


Step 4: Add a “Tilt Device” interaction

For the “Dice of Destiny” pages, another function of the accelerometer can be used – tilting the mobile device, which makes the gesture feel more like handling cards. To add this interaction to your prototype page, repeat the steps described in Step 3, but select “tilts the device” as the trigger and define the tilting direction (left, right, up and/or down), the movement to be made (forward and/or backward) and finally the tilt angle. For the System Reaction, I have selected a vibration and the display of a result page. I applied these interactions to all of my “Dice of Destiny” pages and varied the parameters (tilt angle, movement, and direction) to give you an impression of all the potential settings you can choose. If you play cards, you usually throw or tilt your cards at different angles and in different directions anyway, so these variants make the simulation more realistic. Finally, you may have noticed that I added a sound again. This time, it is the sound of cards, which I had previously recorded.

Add the Multiple System Reactions to the User Action "Tilts the Device"

Interaction Dialog showing a set of multiple System Reactions to the User Action “Tilts the Device”
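As background on how a tilt angle can be derived: when the device is tilted, gravity shifts between the accelerometer’s axes, and `atan2` recovers the angles. The sketch below shows the idea; the axis conventions and function names are assumptions for illustration only:

```javascript
// Recover tilt angles (in degrees) from gravity readings on the
// accelerometer axes. Axis conventions are an illustrative assumption.
function tiltAngles(x, y, z) {
  const toDeg = 180 / Math.PI;
  return {
    pitch: Math.atan2(y, z) * toDeg, // forward/backward tilt
    roll:  Math.atan2(x, z) * toDeg, // left/right tilt
  };
}

// Fire a "tilts the device (left)" trigger once the device is rolled
// left beyond the configured minimum angle.
function tiltsLeft(x, y, z, minAngle) {
  return tiltAngles(x, y, z).roll <= -minAngle;
}
```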


That’s it! You have successfully created an interactive prototype that can simulate how the app reacts to device motions! Do you need help with adding device motions to your prototypes? Just send us a message via support@pidoco.com or Facebook and Twitter.


Happy Prototyping!


In my next column I will make use of even more sensors and show you how to integrate location data and maps (GPS) into your prototypes.


PS: If you would like to read more about sensors in mobile devices and about apps used in healthcare, have a look here:

Mobile Phone Sensors in Health Applications by E. Stankevich, I. Paramonov, I. Timofeev

Improving Health Care Through Mobile Medical Devices and Sensors by D. M. West

Sensors Overview (in the API Guides) by the Android Developers



* Please note: To view this prototype you need to be logged in to your Pidoco account. You can also test this mobile prototype on your mobile device.

How To … Create Touch Gestures and Screen Transitions?

Can you think of an app that does not have any touch gestures? I cannot, and actually think they are essential nowadays. With Pidoco’s new Extended Interactions you can now add touch gestures, screen transitions, device movements, location data and much more to your prototypes. In this series of blog posts I will guide you through our new features over the next weeks. Today I want to start off by showing you how to add touch gestures and screen transitions to your prototypes. So here is how it goes …

To make this how-to a little more tangible, we will go through an example. Let’s imagine we want to build a prototype of a mobile app with which you can view videos and pictures you have taken – a digital photo album. The pictures and videos are grouped in galleries. To look at them, you can open the galleries by tapping on them. An overlay opens and you can have a look at the content by clicking or swiping through it – like a film strip. The prototype could look like the sample prototype below, which I will call “My Gallery” *.

Preview of my Gallery App

Preview of “My Gallery”

Let’s see how to create such an interactive prototype!


Step 1: Create your mobile prototype.

Before we can start adding rich interactions to our prototype, we need a prototype. First, create a new prototype. Then create the different main pages and add the required elements to them. Then add links between the screens as usual. If elements should be reused on several pages (like the header and footer in my app), use the Global Layer function.

Screenflow of "My Gallery"

Screenflow of “My Gallery”

Now that we have our basic prototype set up, we can start adding more interactions.


Step 2: Add touch gestures

Touch gestures can easily be added to various elements like stencils or entire pages using the Interaction Dialog. Let’s commence with the first interaction by creating a tap action to link the element “My Pictures” with the corresponding page. To do so, open the Context Menu of the rectangle “My Pictures”, click on the “Interactions” tab and finally on “Add Interaction“. This will open the Interaction Dialog. In the Interaction Dialog select “taps” as the interaction trigger and define the number of fingers to be used. Then choose “show page” as the system reaction and select the target page to link to (here: “My Pictures”). Under “Options” you can define how the next page will be displayed. “Instant link” lets you go to the next page without reloading and can be used for AJAX-style simulations.

To make it look a little nicer, we can add an Animation. To do so, simply select the desired animation from the dropdown. I have chosen “Slide in from top” (Hint: Have a look at Step 4 to select the right gesture directions).

You can add several interactions to an element as shown below.

Add Tap Gestures to the "My Gallery"

Add Tap Gestures to the “My Gallery”
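For the curious, here is roughly how a tap with a given finger count can be recognized and resolved to its target page: a tap is usually a short touch with very little finger movement. The thresholds and helper names below are illustrative guesses, not Pidoco’s actual implementation:

```javascript
// A tap: short duration, minimal movement. Thresholds are illustrative.
function isTap(touch, maxDuration = 300, maxMove = 10) {
  const moved = Math.hypot(touch.endX - touch.startX,
                           touch.endY - touch.startY);
  return touch.duration <= maxDuration && moved <= maxMove;
}

// Resolve a "taps -> show page" interaction, matching on finger count.
function resolveTap(touch, fingers, interactions) {
  const match = interactions.find(
    i => i.action === "taps" && i.fingers === fingers && isTap(touch));
  return match ? match.showPage : null;
}
```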


Step 3: Add overlays

On the page “My Pictures” we would like to have an overlay that opens when the user taps on a gallery. Within the overlay, we want to display a sequence of pictures (or videos) that the user can scroll through.

To create an overlay, we need to define the content of the overlay on a separate prototype page. So, let’s create a new page for the overlay. When including overlays, do not forget that they are usually a little smaller than the normal pages. You can adjust the page size via the Context Menu of the respective page, for example in the breadcrumb navigation.

Now we need to add a new tap interaction to the gallery elements. To do so, select the trigger element (here: the image called “Gallery 1”) and add a new interaction via the Context Menu. In the Interaction Dialog, choose “taps” as the user action and “show overlay” as the system reaction. Then select the overlay page as the content to be shown (here: Gallery 1 – Pic 1). This will show an overlay on the “My Pictures” page when the respective image is tapped. You can add this type of interaction to all gallery images.

Add Overlay

Add an Overlay to “My Gallery”

Now we would like to allow the user to scroll through all images of the gallery. For this, we need several more overlay pages (one for each picture) that will be linked in a certain order, plus a forward/backward option on each overlay. Let’s start with the first overlay page (here: Gallery 1 – Pic 2). Add an image and little arrows that let the user click through. Finally, add tap actions to these arrows to link to the previous and next gallery picture. Do this for each overlay page.

To allow the images to be clicked through within the same overlay frame without reloading the entire background page, do not forget to select the option “instant link within same frame” when linking the individual overlay pages. This will allow you to show the next image within the overlay without reloading the background page in the simulation!
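One way to picture this behaviour: the background page stays put while the overlay’s content is swapped “within the same frame”. A tiny state model of that idea (the class and method names are illustrative, not part of Pidoco):

```javascript
// Minimal model of a simulation with a background page and an
// optional overlay whose content can be swapped without a reload.
class Simulation {
  constructor(page) { this.page = page; this.overlay = null; }
  showOverlay(content) { this.overlay = content; }  // open on tap
  instantLink(content) { this.overlay = content; }  // swap in same frame
  hideOverlay() { this.overlay = null; }            // "Close" icon
}
```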


Step 4: Add swipe gestures

Now we would like to make the image gallery a little fancier by allowing users to swipe to see the next image. To do this, we need to add some swipe interactions. Open the Context Menu of the picture on the first overlay and click on “Add Interaction” (or add the swipe gesture to the overlay page directly). In the Interaction Dialog select “swipes” as the user action. As we want to scroll towards the next image on the right, the user must swipe to the left, i.e. the swipe direction must be set to “left“. Select “show page” as the system reaction and pick the next overlay page from the “Page/URL” dropdown. To show the next image within the overlay choose the display option “instant link within same frame“. If we want to swipe back and forth (e.g. on images in the middle of the gallery), we need to add two interactions to the picture (one to swipe left and one to swipe right).

Add a Swipe Gesture

Add a Swipe Gesture to “My Gallery”
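If you wonder how a swipe direction is distinguished from a plain tap: typically the gesture must cover a minimum distance, and the dominant axis of movement decides the direction. A sketch of that classification (the minimum distance is an illustrative guess):

```javascript
// Classify a swipe from its start and end coordinates: too short a
// movement is not a swipe; otherwise the dominant axis decides.
function classifySwipe(startX, startY, endX, endY, minDist = 30) {
  const dx = endX - startX, dy = endY - startY;
  if (Math.hypot(dx, dy) < minDist) return null; // more like a tap
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "right" : "left";
  return dy > 0 ? "down" : "up";
}
```

Note how this matches the step above: dragging the finger leftwards yields "left", which is the trigger we bind to showing the next image on the right.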

Finally, we need an option to close the overlay and end the slide show, so that we can go back to our starting page (here: “My Pictures”). For this, add a “Close” icon (to be found among the Icon Stencils in the “Symbols” section) at the top right of the overlay and add the interaction pair “When the user taps, then hide overlay“.

Hint: If you have multiple pictures and want to display the “Close” icon on every overlay page, use the Copy & Paste function to copy the arrows and icon to every overlay and simply change the Page/URL on each overlay.


Step 5: Add screen transitions (animations)

Now we’re almost done. But there’s a final touch we can add. Let’s use animations like “slide in” to simulate page transitions from one image to the next. To do so, go back to the overlay pages and find the “swipe” interactions in the “My Interactions” panel on the right. Selecting an interaction will open the Interaction Dialog. There you can add an animation to the system reaction. Use “slide in from right” for “swipe left” interactions and “slide in from left” for “swipe right” interactions.

Interaction Dialog of Overlay

Interaction Dialog of the Overlay

Now you can add some more touch and swipe gestures to the app to make the “My Videos” page, the headers and footers, and the remaining galleries interactive as well.

For example, I also added a pinch action to the element “Latest Drawing” in the “Picture Gallery”. To do so, open the Context Menu, select the User Action “pinches” and choose the direction “pinch out” to create the impression that the element gets enlarged. As an overlay should pop up, define the System Reaction “show overlay” and link the respective page. To minimize the drawing in the overlay again, use the Action Area stencil in the overlay “Free Drawings”. After adjusting its width and height, open the Interaction Dialog and select the pairing “pinches” and “hide overlay”. (The overlay can still be closed with the “Close” icon.)

Add an Action Area

Add an Action Area
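As with swipes, the pinch direction can be derived from simple geometry: if the distance between the two fingers grows, it is a “pinch out” (enlarge); if it shrinks, a “pinch in”. A sketch of that check (the tolerance value and names are illustrative):

```javascript
// Distance between two touch points.
function dist(a, b) { return Math.hypot(a.x - b.x, a.y - b.y); }

// Compare finger spread at the start and end of the gesture.
function classifyPinch(startTouches, endTouches, tolerance = 10) {
  const change = dist(endTouches[0], endTouches[1]) -
                 dist(startTouches[0], startTouches[1]);
  if (change > tolerance)  return "pinch out"; // fingers moved apart
  if (change < -tolerance) return "pinch in";  // fingers moved together
  return null; // fingers barely moved
}
```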

So far, a tap on a gallery opened an overlay. But to point out that the “Drawings” are opened by a pinch action, we can add a system alert to the “Latest Drawings” element on the page “My Pictures”. To do so, open the Interaction Dialog of the Rectangle, select the pairing “When the user taps, then show a system alert” and simply insert your message (here: Pinch! Pinch to enlarge your latest drawing!).


That’s it! We have successfully created an interactive prototype with touch gestures and screen transitions! Do you need help with creating interactions? Just drop us a line via support@pidoco.com or Facebook and Twitter.


Happy Prototyping!


In my next column I will show you how to add device motions to your prototypes and will tell you more about the sensors embedded in your mobile devices.


* Please note: To view this prototype you need to be logged in to your Pidoco account. You can also test this mobile prototype on your mobile device. I configured it for the iPad.