
Foundation ANAR Clever Advertisement

This is one of the cleverest print advertisements I’ve ever encountered. The ANAR Foundation, which manages a dedicated hotline for children at risk, put out an ad about child abuse.

Viewed normally, the ad shows a child and reads, roughly: "Sometimes child abuse is visible only to the child suffering it."

image


However, they also calculated the average height of a ten-year-old and, using lenticular printing, added a layer of imagery and text that’s only visible from a child’s-eye-view. 

To children, the poster shows the child wounded, displays the ANAR hotline number, and reads: "If someone's hurting you, phone us and we'll help you."

image

Jonathan Ive is not a Graphic Designer

Jony Ive, at a talk I was fortunate to attend a few years ago (sorry, no linkable write-up exists):

One of the things that’s interesting about design [is that] there’s a danger, particularly in this industry, to focus on product attributes that are easy to talk about. You go back 10 years, and people wanted to talk about product attributes that you could measure with a number. So they would talk about hard drive size, because it was incontrovertible that 10 was a bigger number than 5, and maybe in the case of hard drives that’s a good thing. Or you could talk about price because there’s a number there.

But there are a lot of product attributes that don’t have those sorts of measures. Product attributes that are more emotive and less tangible. But they’re really important. There’s a lot of stuff that’s really important that you can’t distill down to a number. And I think one of the things with design is that when you look at an object you make many many decisions about it, not consciously, and I think one of the jobs of a designer is that you’re very sensitive to trying to understand what goes on between seeing something and filling out your perception of it. You know we all can look at the same object, but we will all perceive it in a very unique way. It means something different to each of us. Part of the job of a designer is to try to understand what happens between physically seeing something and interpreting it.

I think that sort of striving for simplicity is not a style. It’s an approach and a philosophy. I think it’s about authenticity and being honest. Not just taking something crappy and styling the outside in an arbitrary disconnected way.

Over the last week the Internet has been aflutter with rumors about an Ive-driven iOS design overhaul. Nearly all of the conversation has been focused on the visual aesthetic, and that’s not surprising. To use Ive’s words, that is “easy to talk about.”

But the overhaul of iOS is certainly more than skin deep.

  • A purely visual overhaul would not have a WWDC deadline. iOS 7 is not expected to be released until the fall, and betas could use old visuals until then. The fact that there is a WWDC deadline suggests there are real functionality changes that developers need to know about.

  • Ive does not lead visual design. He leads Human Interface design. Perhaps this sounds pedantic, but the distinction between graphic design and interface design apparently escapes a lot of people. One of the more asinine quotes I saw last week came from Maggie Hendrie, the Chair of Interaction Design at the Pasadena Art Center, in Wired:

    The very fact that we’re talking about who’s going to design the icons…is a little bit of a concern. Because that’s not innovative.

    Perhaps the problem is the talking?

  • Ive himself wouldn’t stand for simply making a new skin. Look again at that quote: he derides the very idea of separating what is on the surface from what the product actually is.

As with most such debates, "skeuomorphism" versus "flat" is devoid of crucial context. When the iPhone came out, nobody used touch devices. The signaling benefits of skeuomorphism were very useful, especially since most iPhone buyers were buying their first iPhone.

Today, as the premium smartphone market moves towards saturation, an increasing portion of iPhone users are buying their second device. The context is changing. And, likely, so is iOS.

Source


Woodland Wiggle. Interactive games on a giant television at the Royal London Hospital. Source


Submergence explores immersion and interaction within volumetric, light-based visuals created through 8,064 individual points of light. Source


Rubber Duck [UX] Debugging

The basic premise is that you debug your code by explaining it to a rubber duck, line by line, and hopefully realise where the bug lies in the process.

If you’re going to be spending $75 per person on user testing, then you’ll want to make sure you’ve spotted as many “buggy” interactions as possible before anything gets built.

As this is intended to be a practical guide, I’ve created a fictional mobile app to use as an example (so just squint and pretend it’s your own app).

Step 1: Define your flows with User Stories

Be clear about what your app does. For my example app, I wrote:

{name of app} is a point of sale app for iPhone that enables waiters to take a drink and food order at the table and charge customers directly.

Then, you’ll need to have at least one user story like:

As a waiter, I want to take an order of 3 green teas, 1 water and 1 sandwich so that I can charge the customer.

These user stories define the flows through the app that we’re going to be testing. If you want more information on storytelling in design, check out Braden Kowitz’s post.

Step 2: Wireframe your flows

I’d normally start this process in OmniGraffle. Maybe you prefer pen and paper, or another app like Sketch. (If you use something else, leave a comment about it here.)

Taking the order of 3 green teas, I work out all the pages I’ll need to complete the transaction. I also loosely work out how they will connect to each other, i.e. what order they appear in (the flow).

image

This gives me a todo list of pages I need to create wireframes for.

image

You could (and probably should) use this fidelity of wireframe to start testing, but I’m going straight for the final, eye candy version with a “here’s one I made earlier” approach.

Step 3: Prepare your fake screens

image

So I know which screens I’ll need and I know what data goes on each screen. Next I’ll take those wireframes and start the glossy design with shape, contrast, spacing and type (far left) before moving on to applying brand and colour. I use Silkscreen for previewing on devices as I go. Final design:

image

For each page, I’ll use Photoshop Layer Comps to show variations e.g. a receipt page and the same page with a “paying” popup over the top.

image

Step 4: Fake it ‘til you make it

Next up you’ll need Keynote and LiveView (for Mac and iOS).

This is the Rubber Duck Debugging part. It’s important to note that it’s not the final output that matters here; it’s the building of the fake app that will highlight interaction bugs as you go.

So let’s build that fake green tea order:

image

  1. In Keynote, create a new presentation.
  2. Open the Inspector.
  3. Resize your presentation to match your phone. 640x1136 for iPhone 5 for example.

Now you’re ready to cut/paste from Photoshop to Keynote. Switching through your Layer Comps, follow the user stories and flow you defined earlier to get a list of sequential slides in your presentation.

image

You’ll end up with something like this. To test my “3 green teas” order, I ended up with about 15 slides. Some were almost identical with just a different total price showing so it’s not as much work as you think.

Most of the time your fake app will be a linear flow. If you want to double back through the app (re-visiting the start page, for example) you can do this by using a transparent shape enabled as a hyperlink to another slide:

image

At this point, if you play the slideshow, you should be able to click through the app and fake taking a real order.

Congratulations! You just rubber duck debugged your flow. Hopefully, by forcing yourself to mimic a real flow, you will have spotted some early UX problems. The kind of stuff you want to find before a developer spends a week putting a basic feature together.

Step 5: Take the red pill (optional)

As nice as it is to have a flow running on your screen, wouldn’t it be even nicer to actually test it on a device? As I work remotely I also film using the app to share with stakeholders so that they can view the functionality first hand.

You’ll need to modify Keynote preferences and “Allow Exposé, Dashboard and others to use screen”. This will enable you to move the LiveView capture area.

image

Launch LiveView on both your Mac and your iPhone, set it to “High Quality” and Interactive:

image

Your Keynote should now be broadcasting live to your phone, and it should feel as though you’re actually using the app (bonus points if you used Keynote animations to make it more iPhone-y).

As I mentioned, I also film using the app to send to my team so they can comment on functionality. My setup looks something like this:

image

Unfortunately, I don’t seem to be able to embed video, but you can visit this Vimeo page to see the kind of results you should expect.


To give you an idea of the kinds of bugs you might find using this method, here’s what I found when making the above point of sale app. I noticed a problem where every time I selected an item, say a tea, a coffee or a sandwich, I always went back to the top-level menu (rather than staying on the drinks menu, for example). This is logical because you can’t infer what the next item would be.

However, this gets annoying if you want to select 3 coffees. I only noticed this when I went to add 3 green teas in a row. Once I found this UX bug, it was easy to fix, but I wouldn’t have spotted this problem until much later in the dev cycle, rather than at this early design stage.

Source


Pasting Layer Styles

I guess everyone knows that you can copy/paste layer styles through the menu Layer → Layer Style → Copy/Paste Layer Style. But what I missed is a function that pastes effects by adding them to the current layer instead of replacing everything. Let me show an example.

Prologue

Let’s say you have a circle on your canvas with a Stroke effect on it, and a style with an Inner Shadow in your clipboard. If you paste it via the default Paste Layer Style, it will replace everything and leave you with a sad circle with an Inner Shadow:

Not cool. Sometimes I need to apply one global layer style effect (Drop Shadow, for example) to a few layers that already have their own layer styles. I have to add this style manually. Tedious process.

Dear diary

So, to solve this problem I decided to write a script.
When I made it, as usual, I wanted to assign a hotkey to it. As I wrote before, for working with Layer Styles I have these shortcuts:
1. cmd+ctrl+c — Copy Layer Style
2. cmd+ctrl+v — Paste Layer Style
3. cmd+ctrl+x — Clear Styles
4. cmd+ctrl+t — Scale Effects
And I decided to assign the hotkey cmd+ctrl+alt+v to the script.

Imagine my face, when Photoshop showed me this warning:

Photoshop already has this function! And it has the same hotkey that I wanted to assign to my script!

Mysterious part

But the most interesting thing is that there’s no such menu item in Photoshop at all. It’s as if they made the function but didn’t add it to the GUI. I guess that’s why the warning says that we can’t assign a different hotkey to “Paste layer effects without replacing the current effects”: we just can’t access this function in the Keyboard Shortcuts… dialog.

The essence of the post in one picture

So, this time instead of script I have a hotkey for you:

Paste layer effects without replacing the current effects

Unfortunately, I don’t know an analogous combination for Windows. My brother and I tried pressing grey alt or shift instead of cmd: nothing happened. If someone finds such a hotkey for Windows, please reach out to me on Twitter or in the comments, and I’ll update the post.

Yep

I wonder what other hidden functions Photoshop actually has.

Imagine what if …

Source


BBC Responsive Images

Here’s a quick overview of our strategy for delivering images into a responsive web page. I didn’t want to add to the debate on responsive images, as it’s already a well-documented topic, but recent industry chatter means we need to clarify our position.

In his Responsive Day Out presentation, Paul Robert Lloyd spoke about the use of images on the responsive BBC News website. He said (and please correct me if I misinterpreted this Paul), while referring to the homepage: that the “first image is important enough to be in the markup”, and that “… all other images are considered enhancements. If not there, the experience is not degraded at all.”

I think Paul misinterpreted the importance given to the first image in the page: because it was in the markup by default, it appeared more important than the other images on the page (which we add using JavaScript). We do this with other types of content. Anything we consider to be secondary info is linked to with hyperlinks and replaced with the content of the linked-to document via JavaScript. We call this approach transclusion.

The semantic importance ascribed by Paul to images added directly into the page is not quite correct. Yes the first image is in the markup and will be loaded by “HTML4” and “HTML5” browsers (click here for a definition of both), but this is more a technical consideration than an editorial one.

Although the rest of the images are loaded via JavaScript, we don’t consider them to have any less semantic importance than the first image. We load images in via JavaScript so users don’t download two different resolutions of the same image. Unfortunately, because “HTML4” browsers are not served our JavaScript, they won’t see any images. Our editorial colleagues weren’t happy with this, so we compromised with them and agreed to add one image per page into the markup that all users would get regardless of their browser.

You can see it for yourself: disable JavaScript and then load our homepage. The first article will have a poor quality image. But if you re-enable JavaScript and load the page again, you’ll see it gets replaced by an image more appropriate to your screen resolution. This isn’t a mistake, yet it goes against one of our core principles: “Only download what you are going to use”.

Our responsive images process

There are two steps to adding responsive images to a page:

  • Creating multiple versions of the same image at different resolutions
  • Deciding which image to use at run time

As the BBC News site publishes MANY articles every day, many images are published too. BBC News has an automated process to create 18 different versions of each published image.

When a journalist publishes an article via our CMS to the live site, any images associated with it are also published to a live URL. The link from the article to the image is modified by our PHP application layer, which adds a resolution size into the URL. The URL is changed from…

http://wwwnews.live.bbc.co.uk/media/images/67431000/jpg/_67431205_firing.jpg

to…

http://ichef.bbci.co.uk/news/235/media/images/67431000/jpg/_67431205_firing.jpg

This new URL points to our image resize service that we call “Image Chef”. Image Chef sits between our website and the static image server, creating new versions of the original image based on instructions that we set for each image resolution size.
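The rewrite described above amounts to a simple string transformation. Here is a minimal sketch in JavaScript (the real work happens in the BBC’s PHP application layer; the function name is mine, the host names are taken from the article’s example):

```javascript
// Sketch of the Image Chef URL rewrite the article describes — illustrative
// only, not the BBC's production code.
function toImageChefUrl(staticUrl, width) {
  // Swap the static image host for the Image Chef host and inject the
  // requested resolution size after the /news/ prefix.
  return staticUrl.replace(
    'http://wwwnews.live.bbc.co.uk/',
    'http://ichef.bbci.co.uk/news/' + width + '/'
  );
}
```

Feeding in the article’s example URL with a width of 235 produces the Image Chef URL shown above.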

On each of our webpages there are two elements that are relevant to our responsive image solution. We have the image that is double loaded at the top of each page:

<img src="http://ichef.bbci.co.uk/news/235/media/images/67431000/jpg/_67431205_firing.jpg" class="image-replace" alt="image" />

And there are divs with the class “delayed-image-load”:

<div data-src="http://ichef.bbci.co.uk/news/235/media/images/67431000/jpg/_67431205_firing.jpg" class="delayed-image-load"></div>

If the user has a “HTML5” browser, JavaScript is loaded into the page that will convert these divs into images. The JavaScript will also check the width of each image element and change the URL to download the appropriate image. A resize event is also added to the window object. If the user resizes the browser, the JavaScript will recheck all the widths of the image elements and change the URL if necessary.

When the JavaScript was first written in 2011, we didn’t know much about responsive images. We made an arbitrary decision to add a breakpoint every 30px from 96px up to 736px. This wasn’t based on any evidence or rationale. It was informed more by the image sizes that fit into the old grid system available in BBC’s GEL layout guide.
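The width-snapping step can be sketched as a pair of pure functions. The breakpoint values below are an illustrative subset I made up, not the BBC’s actual 18 sizes:

```javascript
// Illustrative sketch of snapping a measured element width to a supported
// image size. BREAKPOINTS is a made-up subset; the real service supports
// 18 sizes between 96px and 736px.
var BREAKPOINTS = [96, 235, 304, 464, 624, 736];

function pickBreakpoint(elementWidth) {
  // Use the smallest supported size that still covers the element,
  // falling back to the largest size for very wide elements.
  for (var i = 0; i < BREAKPOINTS.length; i++) {
    if (BREAKPOINTS[i] >= elementWidth) return BREAKPOINTS[i];
  }
  return BREAKPOINTS[BREAKPOINTS.length - 1];
}

function sizedSrc(dataSrc, elementWidth) {
  // Rewrite the width segment of an Image Chef URL for the chosen size.
  return dataSrc.replace(
    /\/news\/\d+\//,
    '/news/' + pickBreakpoint(elementWidth) + '/'
  );
}
```

On load (and again on each window resize), something like `sizedSrc(div.getAttribute('data-src'), div.clientWidth)` would give the URL to fetch for each placeholder div.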

Issues with responsive images

There are a few notes to make about this. A great post by Jason Grigsby proposes using a break point rationale based on file size. In the comments there is a discussion about choosing which version of the image to download based on element size versus viewport size.

We’re currently using element size. As the images are loaded in by JavaScript, they can’t start to download until the JavaScript is in the page. This means waiting for domReady, and also for our AMD loader to download and call in the rest of our JavaScript. This blocks the rendering of images until all the JavaScript executes.

At the moment making the choice based on element size is the best fit for us, as element widths do not correlate to viewport widths. We use images in many different contexts, stretched across columns, left aligned, etc, so we have to use element width. We do use a grid, so maybe in the future there will be a clever way to match the element width to the viewport width using a grid system.

Jvaux from Facebook states in the comments that scaling images is also a problem, especially when the page scrolls. We definitely worry about processing time as well as download time. This suggests that downloading the image that best fits the element’s width is kinder to DOM reflow than scaling images up or down.


Colour management and UI design

Most people who design web sites or apps in Photoshop will, at one point or another, have had issues trying to match colours in images to colours generated by HTML, CSS or code. This article aims to solve those problems once and for all.

Colour management attempts to match colours across devices

In the printing world, colour management typically involves calibrating your entire workflow, from scanner or digital camera to computer display to hard proofs to the final press output. This can be quite a tall order, especially when the devices use different colour spaces — matching RGB and CMYK devices is notoriously hard.

When designing or editing for TV, it’s common for the main editing display to be calibrated and for a broadcast monitor to be used — these show a real time proof for how the image might look on a typical TV in a viewer’s home.

In those scenarios, colour management offers many benefits and is highly recommended.

When building web and application interfaces, the situation is a little different. The final output is the same device that you’re using to create the artwork — a computer display. (Please ignore the differences in gamma between Windows, OS X prior to 10.6 and the iPhone for now, as these are covered later.)

There is a catch though. Even though you’re creating your web or app interface on the same device the final product will be shown on, there’s various sources for colours: images (typically PNG, GIF and JPEG), style markup (CSS) and code (JavaScript, HTML, Objective-C etc). Getting them all to match can be tricky.

The goal

When designing websites or app interfaces, we’d like to perfectly match the colours displayed on screen in Photoshop and saved in files with what’s displayed in other applications, including Firefox, Safari and the iOS Simulator. We want the colours to not just look the same, but the actual values saved into files to match the colours we defined in Photoshop perfectly.

Colours cannot shift or appear to shift in any way, under any circumstance.

Why is this so difficult?

Photoshop applies colour management to images displayed within its windows and also to the files it saves. This is a bad thing if you’re working exclusively with RGB images that are for web or onscreen UI. With the default Photoshop settings, #FF0000 can display as #FB0018 and #BB95FF as #BA98FD. The exact values will depend on the display and profile you’re using, but differences definitely exist with the default Photoshop settings.

How does Photoshop’s colour management differ from colour management in OS X and Windows?

OS X’s colour management is applied to the entire display at the very end of the processing chain, after the main buffer in video RAM. This means that although colour management is being applied, software utilities that measure colours on screen (like /Utilities/DigitalColor Meter) will still report the same values as you saved in the file or entered into your code. I believe the colour management in Windows Vista and Windows 7 (Windows Colour System) works in a similar fashion.

Photoshop’s colour management is applied only to the image portion of its windows and to the files it saves. This colour correction happens as Photoshop draws the image on screen, so software utilities that measure colours on screen often report different colours to the ones you specified. It’s worth noting that OS X’s colour management is applied on top of Photoshop’s.

The best solution I’ve found is to disable Photoshop’s colour management for RGB documents as much as possible. Doing so forces RGB colours on screen and saved to file to match the actual colour value. If you need your monitor to be calibrated a specific way, then you’ll be best served by changing it at an OS level for web and app design work.

Disabling colour management used to be quite easy in Photoshop CS2 and all versions prior, but now requires a little more skill.

Disabling Photoshop’s RGB colour management

These instructions are for Photoshop CS4 & CS5 on Mac and Windows. Setting up CS3 is very similar.

Step 1 — Choose Edit > Color Settings and set the working space for RGB to Monitor RGB.

Step 2 — If you’re using Photoshop CS6, click More Options and turn off Blend Text Colors Using Gamma, because it changes how non-opaque text is rendered. It should already be off, but if it’s not, turn off Blend RGB Colors Using Gamma as well.

Step 3 — Open a document and choose Edit > Assign Profile, then set it to Don’t Color Manage This Document. This must be done for every single document you work on.

Step 4 — Ensure View > Proof Colors is turned off.

Step 5 — When saving files with Save For Web, ensure Convert to sRGB is turned off. If you’re saving a JPEG file, then also turn off Embed Color Profile (there are some cases where you might want this on for photos, but chances are you’ll want it off for interface elements and icons).

The difference between “Assign Profile” and “Convert to Profile”

Now might be a good time to mention the difference between “Assign Profile” and “Convert to Profile” so you’ll know when to use the appropriate function.

Each Photoshop document contains a colour profile that’s separate to the actual colour data stored for each pixel. “Assign Profile” simply changes the profile in the document, without affecting any of the colour data. It’s a non-destructive action — you can assign a new colour profile to your documents as often as you like without any fear of doing damage. Assigning a new profile may change the way your document appears on screen, but the data contained in the file will be unaltered.

“Convert to Profile” is quite different. It not only assigns a colour profile to a document, but it tries to keep your image looking the same on screen. It does this by processing the colour data contained in the file for each pixel. Converting to a new profile is more likely to maintain the way your document appears on screen, but the data contained in the file will be permanently changed. Use with caution.
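One way to internalise the difference is as a toy model. Nothing below is Photoshop API; it just mirrors the behaviour described above, treating a document as a profile tag plus pixel data:

```javascript
// Toy model of "Assign Profile" vs "Convert to Profile" — not Photoshop's
// API, just an illustration of which operation touches pixel data.
function assignProfile(doc, newProfile) {
  // Changes only the profile tag; the stored pixel values are untouched
  // (so the file's data is preserved, though on-screen appearance may change).
  return { profile: newProfile, pixels: doc.pixels };
}

function convertToProfile(doc, newProfile, transform) {
  // Re-processes every pixel to try to preserve appearance; the stored
  // values are permanently changed.
  return { profile: newProfile, pixels: doc.pixels.map(transform) };
}
```

This is why “Assign Profile” is safe to repeat as often as you like, while “Convert to Profile” should be used with caution.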

If you’re copying layers from one Photoshop document to another, it’s a good idea to ensure both documents have been assigned the same colour profile.

Illustrator is the same as Photoshop

If you’d like images saved from Illustrator or imported from Illustrator to Photoshop to match as well, then follow the steps below. These instructions are for Illustrator CS4 & CS5 on Mac and Windows. Setting up Illustrator CS3 is very similar.

Step 1 — Choose Edit > Color Settings and set the working space for RGB to Monitor RGB.

Step 2 — Open a document and choose Edit > Assign Profile, then set it to Don’t Color Manage This Document. This must be done for every single document you work on.

Step 3 — Ensure View > Proof Colors is turned off.

Step 4 — When saving files with Save For Web & Devices, ensure Convert to sRGB is turned off. If you’re saving a JPEG file, then also turn off Embed Color Profile (there are some cases where you might want this on for photos, but chances are you’ll want it off for interface elements and icons).

Gamma differences

Windows has used a gamma of 2.2 since its introduction. OS X has used a gamma of 1.8 for all versions prior to Snow Leopard. Snow Leopard, Lion and Mountain Lion all use a gamma of 2.2. What does this mean? Before Snow Leopard, web pages looked darker on Windows. Thankfully, both operating systems are in sync now, so the same web page design should look very similar on a Mac and a PC that are using the same monitor.
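The darkening effect is easy to check numerically. As a back-of-envelope model (ignoring the full colour pipeline), a stored value in the range 0–1 is displayed at roughly value^gamma, so the same mid-grey comes out darker at gamma 2.2 than at 1.8:

```javascript
// Back-of-envelope model of display gamma: a stored value in [0, 1] is
// shown at roughly Math.pow(value, gamma). Higher gamma means darker
// midtones, which is why pre-Snow Leopard Macs rendered pages lighter.
function displayedLuminance(value, gamma) {
  return Math.pow(value, gamma);
}

// The same 50% grey on the two systems:
var onOldMac = displayedLuminance(0.5, 1.8); // roughly 0.29
var onWindows = displayedLuminance(0.5, 2.2); // roughly 0.22 — darker
```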

Information about iOS’s gamma is a little hard to come by, but I believe it’s 2.2. This is one of the reasons why it’s also a good idea to test your interface on an iPhone or iPad.

Device testing

There’s a good chance that your iPhone, iPod or iPad’s screen will look different to your computer’s display. Screen types, warmth of colors and even sub-pixel patterns vary greatly, so you will probably want to tweak the design after seeing everything in situ. Some display types, such as AMOLED, can appear far more saturated and with much higher contrast than typical computer LCDs. Not to mention, seeing your design on the device is exciting.

There are many ways to get your final mockup onto a mobile device. We weren’t happy with the options available, so we built our own tool for the job. Skala Preview lets you view realtime design previews on your device, as you edit in Photoshop. Previewing your design in situ lets you test tap sizes, text sizes, colour, contrast and ergonomics, all at a time where changes can be easily made — during the design process. It closes the loop, meaning you can iterate faster to a better final design.

Handy tools for Mac users

On Mac, moving colours between Photoshop and code can be made easier with Developer Picker, Hex Color Picker and Colors (all free).

Conclusion

You’ll now be able to move bitmap and vector images between Photoshop and Illustrator without any colour shifts at all, using any method. You’ll also be able to use the colour picker in Photoshop to grab a colour, then use the same HEX colour value in your CSS, HTML, JavaScript, Flash or Objective-C code and it’ll match your images perfectly.

I hope this article helped. If you have any questions then feel free to ask us via Twitter.

Source
