Design Practices for Touch-free Interactions

Touch interaction has become practically ubiquitous in developed markets, and that has changed users’ expectations and the way UX practitioners think about human–computer interaction (HCI). Now, touch-free gestures and Natural Language Interaction (NLI) are bleeding into the computing mainstream the way touch did years ago. These will again disrupt UX, from the heuristics that guide us to our design patterns and deliverables.

HCI Is Getting “All Shook Up”

With touch, people were given a more natural, intuitive way to interact with computing devices. As it moved into the mainstream, it opened up new interaction paradigms. Touch-free gestures and NLI, while not exactly new, are just now moving into the computing mainstream thanks to Microsoft and Apple, respectively. If they catch on, these interaction models may open up further paths toward a true Natural User Interface (NUI).

Touch-free gestures

Microsoft’s Kinect popularized this interaction model for the Xbox gaming platform. Following that, they have made the Kinect available for Windows, and smart TVs like Samsung’s are moving touch-free gestures out of gaming and into our everyday lives.

Kinect for Windows has an interesting feature called near mode, which lets people sitting in front of a PC use the touch-free gestures without standing up. A benefit for productivity software is the reduced need for interface elements: on-screen artifacts can be manipulated more like physical objects. Another benefit is the ability to use computers in situations when touching them is impractical, like in the kitchen or an operating room.

Natural-Language Interaction

The idea of talking to our computers is not new, but only with the success of the iPhone’s Siri is it coming into the commercial mainstream. The best part of NLI is that it mimics the ordinary way humans interact with each other, which we learn very early on in life.

Not only does NLI (when done well) make the interaction with a computer feel very natural, it also primes people to anthropomorphize the computer, giving it a role as a social actor. Giving voice to a social actor in people’s lives bestows immense power upon designers and content creators to build deeply emotional connections with those people.

Fundamentals of Interaction

While new technologies offer greater opportunities for richness in interaction, any such interaction occurs within the context and constraints of human capability. As we move forward to welcome new interaction models, we are building a knowledge infrastructure to guide UX practitioners in taking advantage of them. HCI fundamentals serve as our theoretical basis.

HCI Model

Bill Verplank’s model of human–computer interaction describes user interactions with any system as a function of three human factors:

  • Input efficacy: how and how well we sense what a system communicates to us
  • Processing efficacy: how and how well we understand and think about that communication
  • Output efficacy: how and how well we communicate back to the system

By using design to support these elements of the user experience, we can enrich that experience as a whole. These human factors serve as a useful thought framework for envisioning and evaluating new design heuristics and patterns.

Heuristics

While the heuristics we know and love still apply, some additional ones will help us make better use of touch-free gestures and NLI. Here are some examples:

  1. Gestures must be coarse-grained and forgiving (output efficacy) - People’s hand gestures are not precise, even on a screen’s surface. In mid-air, precision is even worse. Expecting too much precision leads to mistakes, so allow a wide margin and a low cost for errors (a minimal code sketch follows this list).
  2. A system’s personality should be appropriate to its function (processing efficacy) - When speaking with something, people automatically attribute a personality to it. If it speaks back, its personality becomes better defined in users’ minds. Making a productivity program (e.g., Excel or Numbers) speak in a friendly and understanding way could make users hate it a little less. Making an in-car navigation system firm and authoritative could make it feel more trustworthy. Siri makes mundane tasks such as planning appointments fun because Apple gave her a sense of humor.
  3. Users should not be required to perform gestures repeatedly, or for long periods of time, unless that is the goal (output efficacy) - Repetitive or prolonged gestures tire people out, and as muscle strain increases, precision decreases, affecting performance. If the idea is to make users exercise or practice movement, though, this doesn’t apply.
  4. Gestural and speech commands need to be socially appropriate for users’ environmental contexts (input and output efficacy) - The world can see and hear a user’s gestures and speech commands, and users will not adopt a system that makes them look stupid in public. For example, an app for use in an office should not require users to make large, foolish-looking gestures or shout strange verbal commands. On the flip side, including silly gestures and speech in a children’s game could make it more interesting.
  5. Affordances must be clear and interactions consistent (processing efficacy) - As Josh Clark and Dan Saffer remind us, interface gestures are more powerful but less discoverable than clickable/tappable UI elements. The same goes for speech. Gestures and speech in an interface each add an additional interaction layer, so clear affordances and consistent interactions are critical for those layers to work.
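
To make heuristic 1 concrete, here is a minimal sketch in JavaScript of a forgiving mid-air hit test. It assumes a hypothetical tracker that reports a hand position in screen pixels; the function name, the slop margin, and the numbers are invented for illustration, not part of any real Kinect API:

// Accept a mid-air "press" anywhere within a generous margin (slop)
// around the target, since mid-air pointing is far less precise than touch.
function forgivingHitTest(target, handX, handY, slop) {
  // target: { left, top, width, height } in screen pixels
  return handX >= target.left - slop &&
         handX <= target.left + target.width + slop &&
         handY >= target.top - slop &&
         handY <= target.top + target.height + slop;
}

// Example: the hand lands 30px outside a 100x40 button, but the wide
// margin still counts it as a hit, keeping the cost of imprecision low.
var button = { left: 200, top: 100, width: 100, height: 40 };
console.log(forgivingHitTest(button, 170, 110, 48)); // true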

Patterns

Since touch interfaces became mainstream, gesture libraries like this one have appeared to help designers communicate touch interactions. Not limited to the X and Y axes on a screen, touch-free gestures allow designers to leverage depth and body movements. Combined with natural speech interaction, designers can create rich multi-modal interactions, where, for example, a user’s hands control one aspect of the system while her voice controls another.

"Near Mode" Gestures (Explicit Input)

The original Kinect uses full-body gestures, but near mode is what makes the new version so special. Here are some “sitting in front of your PC” gestures, with illustrations based on this gesture library by think moto.

  1. Swipe, Spread, and Squeeze: These are the most basic gestures and represent full-hand, touch-free translations of finger-sized touch gestures.

    Swipe-Spread-Squeeze Gesture
  2. Push and Pull: Users could use these gestures to move on-screen artifacts closer or farther away.

    Push-Pull Gesture
  3. Grasp and Release: Hand-level Spread and Squeeze gestures can be used for interface-level zooming actions, while finger-level pinching can be used to grasp on-screen artifacts. When a user “takes hold” of such an artifact, she can manipulate it using dependent gestures.

    Grasp-Release Gesture
  4. Twist: One such dependent gesture is Twist. Now that artifacts can be manipulated in a third dimension, a “grasped” artifact can be twisted to change its shape or activate its different states (e.g., flipping a card or rotating a cube).

    Twist Gesture
  5. Throw: Besides twisting an on-screen artifact, a user can throw it to move it a long distance quickly. This could trigger a delete action or help users move artifacts in 3D space.

    Throw Gesture
Body Sensing (Implicit Input)

In addition to new opportunities for gestural input, a Kinect sensor on PCs could detect other aspects of our body movements that might indicate things like fatigue or emotional state. For example, if gestural magnitude increases (e.g., if a user’s gestures become bigger and wilder over time), the system could know that the user might be excited and respond accordingly. With productivity software, such excitement might indicate frustration, and the system could attempt to relax the user.

Another implicit indicator is gestural accuracy. If the system notices that a user’s gestures are becoming lazier or less accurate, the system might recognize that the user is tired and encourage him to take a break. And although it’s not a strictly implicit indicator, sensing whether a user is seated or standing could enable different functions or gestures.
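
As a speculative sketch of such body sensing (this is not any real Kinect API; assume some skeletal-tracking layer delivers one sample per completed gesture, with an amplitude and a distance-from-target error, and the window sizes and thresholds here are arbitrary):

// Compare the latest gestures against a longer baseline: rising amplitude
// suggests excitement, rising error suggests fatigue.
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function inferUserState(samples) {
  if (samples.length < 30) return "unknown"; // not enough history yet
  var baseline = samples.slice(-30, -10);
  var recent = samples.slice(-10);
  var ampRatio = mean(recent.map(function (s) { return s.amplitude; })) /
                 mean(baseline.map(function (s) { return s.amplitude; }));
  var errRatio = mean(recent.map(function (s) { return s.error; })) /
                 mean(baseline.map(function (s) { return s.error; }));
  if (ampRatio > 1.25) return "excited";  // gestures growing bigger and wilder
  if (errRatio > 1.25) return "fatigued"; // gestures growing less accurate
  return "neutral";
}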

Speech Interaction

The complexity of human languages makes it less straightforward to create discrete patterns for NLI than it is for gestures. However, there are structures inherent in language that could provide a basis for such patterns.

For example, there are two major types of user input: questions (to which a system returns an answer) and commands (which cause a system to perform an action). Looking deeper, individual sentences can be broken down into phrases, each representing a different semantic component. Books such as Speech Technology, edited by Fang Chen and Kristiina Jokinen, show how the fields of linguistics and communication provide valuable insight to designers of natural-language interfaces.
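
As a toy illustration of that first structural split (a real natural-language interface would use a proper parser; the word list here is invented and deliberately naive):

// Classify an utterance as a question (return an answer) or a
// command (perform an action), keyed off the opening word.
var INTERROGATIVES = ["who", "what", "when", "where", "why", "how",
                      "is", "are", "do", "does", "can", "could"];

function classifyUtterance(text) {
  var first = text.trim().toLowerCase().split(/\s+/)[0];
  return INTERROGATIVES.indexOf(first) !== -1 ? "question" : "command";
}

classifyUtterance("What time is my next meeting?"); // "question"
classifyUtterance("Schedule a meeting at 3pm.");    // "command"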

Deliverables

Perhaps the biggest challenge with using novel interaction models is communicating with stakeholders who are not yet familiar with those models. Visualizing things that don’t exist is difficult, so UX designers will have to find new ways to communicate these new interactions.

Specifications

Interaction design specs have been two-dimensional until now because that’s all they needed to be. For some touch-free gestures, though, variables such as “distance from the screen” or “movement along the z-axis” will be important, and those are best visualized in a more three-dimensional way.

Speech interactions are even more complex. Since a user’s interaction with the system literally becomes a conversation, the design needs to account for extra contingencies, e.g., users expressing the same command with different words, accents, or intonations. A natural language interface must accommodate as many of these variations as possible.
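
One simple way to sketch that accommodation is a table mapping many phrasings onto one canonical intent (the phrases and intent names below are invented for illustration; production systems use far richer models):

// Different wordings of the same command resolve to a single action.
var INTENTS = {
  "set an alarm": "alarm.create",
  "wake me up": "alarm.create",
  "set a timer": "alarm.create",
  "what's the weather": "weather.today",
  "will it rain": "weather.today"
};

function resolveIntent(utterance) {
  var text = utterance.toLowerCase();
  for (var phrase in INTENTS) {
    if (text.indexOf(phrase) !== -1) return INTENTS[phrase];
  }
  return null; // unrecognized: ask the user to rephrase
}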

Also, there are more variables that play a role in the way a system communicates to users. Its tone, choice of words, cadence, timbre, and many other factors contribute to how a user perceives that system.

Personas

Because speech-enabled computers will become social actors in people’s lives, their (perceived) personalities are of critical importance. Thankfully, we can apply existing skills to crafting them.

UX professionals have traditionally used personas to illustrate types of users. With computers communicating like humans, the same technique could be used to illustrate the types of people we want those computers to emulate. System personas could be used to guide copywriters who write speech scripts, developers who code the text-to-speech engine, or voice actors who record vocal responses.

For a system to show empathy, it could measure changes in users’ speech patterns, again recognizing excitement, stress, or anxiety. If a user is getting frustrated, the system could switch from an authoritative persona (showing reliability) to a more nurturing one (calming the user down).

Prototyping

Responding elegantly to users’ body movements and speech patterns requires a nuanced approach and clear communication between stakeholders. Along with specifications, prototypes become more important because showing is more effective than telling. This holds true for both testing and development.

For now, there is no software package that can quickly prototype Kinect’s touch-free gestures; downloading the SDK and building apps is the only way. For speech, though, there are a few freely available tools, such as the CSLU Toolkit, which allow designers to quickly put together a speech interface for prototyping and experimentation.

Until prototyping tools get sophisticated enough to be fast, flexible, and effective, it looks like we’re going to have to get back to basics. Paper, props, storyboards, and the Wizard of Oz will serve until then.

The Big Wheel Keeps on Turning

User interfaces for computers have come a long way from vacuum tubes and punch cards, and each advancement brings new possibilities and challenges. Touch-free gestures and natural language interaction are making people’s relationship with computers richer and more human. If UX practitioners want to take full advantage of this changing relationship, our theories and practices must become richer and more human as well.

Let us move forward together into this new interaction paradigm and get closer to our users than ever.

Tranquil Oasis Above the Polluted Urban Life: City in the Sky Concept


The Future Of Touchscreens: Physical Buttons That Appear And Disappear

Tips For A Finely Crafted Website

Good Web designers know what many others might not realize: that creating a truly beautiful website requires care, time and craft. And similar to how a craftsperson molds their creation by combining raw materials, skill and unwavering focus on the vision, a beautiful design is planned and executed with exceptional focus on what is to be achieved by the website.

It is important, however, not to confuse a beautifully crafted website with one that simply brushes over the content with attractive visuals. This article provides a small selection of tried and true methods that Web designers regularly employ to give a website that bespoke look and feel (think of a tailor who carefully cinches a suit here and there to ensure a perfect fit).

Make no mistake: these methods do take extra time, and they often result in improvements that the untrained eye might not consciously register. But the payoff is a better overall experience for the user. Users will leave with a smile and a lasting impression or relationship with your website, even if they can’t quite put their finger on why.

Start From Scratch, Every Time



For a truly customized design, always start with a blank slate.

Whenever possible, avoid cobbling together work from old designs. Every website should be treated as unique, regardless of its type (e-commerce, author website, blog, etc.). The very best websites are designed to fulfill a defined purpose. Building your design from the ground up fosters clarity, focus and commitment to the design.

At every step in the design process, ensure that everything in the layout has a deliberate purpose; you should be able to explain your thought process behind every element on the page. If something has no reason to exist on the page, then consider removing it because it could simply distract visitors from more important content.



Every element on the page should have a reason for existing.

Simply copying and pasting elements from previous designs is a crutch that prevents you from experimenting, learning and growing as a Web designer. This isn’t to downplay the value of iterating on previous design elements, which can be a useful exercise and which can challenge you to get creative and experiment.

Invest In Custom Icons And Graphics

Spotify uses custom icons and graphics, rather than relying on stock images, to give their design soul and character.

Custom illustrations, imagery and iconography make for a unique experience on a website. While stock photography and vectors do save a ton of time, a photo of a smiling sales rep wearing a headset just feels phoney to visitors when they’ve seen the same image on five other websites.

Invest time in creating custom icons and graphics to preserve the look and feel of your website, as well as the authenticity of your message.

Spotify is a perfect example of this. Its lighthearted, “sketchy” illustration style sets it apart from all the other music services, and it leaves a lasting, positive impression on visitors.

Remove Friction That Impedes Scrolling



Encourage scrolling by removing obstacles.

As screen resolutions expand and touchscreens multiply at every turn, we’re encountering longer pages and smaller website footprints. And with good reason: a user who has to scroll down the page will encounter far less obstruction than someone who has to click a link and wait for the new page to load. Removing this obstruction will not only simplify the navigation, but help to tell a cohesive story, without the interruption of page loading.

As Paddy Donnelly’s excellent article “Life Below 600px” explains, the fear that users will ignore any content unlucky enough to fall below the fold afflicts designers much less today. And the proliferation of devices of different resolutions makes it almost impossible to determine where exactly the fold lies.



“Life Below 600px” is an excellent essay that dispels the myth of the fold.

Weave a cohesive story concept by getting creative with connecting individual page sections. This will promote a natural flow for the user, encouraging them to explore deeper into the page, building momentum in their experience and making it easier to get all the way through the page. You can even tie radically different styles of content together on the same page by adding small visual cues at the bottom of each section to indicate that more content awaits — much like how different rooms in a house can have entirely separate functions yet retain a common theme. If each section on the page comes to an abrupt end or looks like a footer, then users will be less likely to reach the end.

In fact, scrolling has become such a natural interaction on most Web-enabled devices that Apple did away with the scroll bar in OS X Lion.

The Dangers of Fracking and Slavery Footprint deserve hearty mentions. Both websites drive the user to scroll down with incredibly creative parallax effects and compelling stories.

Make The Design Invisible Through Interactivity And Functionality

After more than 20 years of evolution, the paradigms of the print world still provide the fundamental building blocks of the way we present content online. Think about the terms we use to describe the Web: pages, headlines, columns, scrolling. These are band-aid metaphors that we’ve adopted to make the Web more understandable to the public. But the medium itself is capable of so much more. Static text and images are usually fine, but human beings by nature crave varied stimulation, and the Web is capable of feeding that craving with a much more interactive and richer experience.

Joyride uses “Pit stops” of interactivity to keep the visitor engaged with the website.

Providing clear points on the page where users can interact with the design, rather than passively consume it, will help to relieve the burden of wading through long passages of copy. Visitors will experience the invisible part of a design; content sliders, tooltips, lightboxes, modal windows and other points of interaction give them something to do and can propel them further along in the story, much like how a good museum exhibit mixes methods of conveying information. Of course, swing too far to the extreme of too much interactivity and you’ll distract users, so be cautious of how much you build in. All interaction points should serve the overarching goal of the page.

Joyride is a great example of this approach. While the page has plenty of content, Joyride does a great job of guiding the user around the page and highlighting points of interest to come back to later. (And the little surprise at the end will leave you grinning.)

Pay Attention To Detail

Whether you’re going for clean minimalism or complex and illustrative, pay special attention to the details of every element on the page. Even slight inconsistencies will be picked up by users subconsciously, thus diminishing their experience or confusing them.

A few common pitfalls to watch out for:

  • Typography gone wild
    Each typographic treatment in the design should be consistent with its function. Headings in one part of the page should look the same as headings elsewhere on the page, and indeed throughout the rest of the website (see the sketch after this list).
  • Buttons, buttons everywhere
    Be conscious of where a button style is called for, as opposed to plain text links. Overusing buttons diminishes their overall effectiveness.
  • Changing gradients
    If your design has gradients, use them consistently, with the same shades across like elements and with the same gradation.
  • Mind the gaps
    Consistent spacing and alignment between page elements will make the layout feel refined and high quality.
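
One low-effort way to enforce this consistency is to define each treatment exactly once in the stylesheet and reuse it everywhere. A minimal sketch (the selectors, sizes, and colors are illustrative only, not a prescription):

/* Headings share one voice across the page and the site. */
h2 { font: bold 24px/1.3 Georgia, serif; margin: 0 0 12px; }
h3 { font: bold 18px/1.3 Georgia, serif; margin: 0 0 12px; }

/* One button style, used only where a true call to action lives. */
a.button { padding: 8px 16px; border-radius: 4px; color: #fff;
           background: linear-gradient(#4a90d9, #3672b5); }

/* The same gap between all page sections keeps the rhythm consistent. */
.section { margin-bottom: 24px; }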

Thoughtfully questioning each element in the layout is key to achieving a highly polished design. The burden is on you to prove that an element has a reason to be there and is not superfluous to the experience. At the end of the day, don’t fall into the trap of thinking “No one will notice,” because the chances are high that someone will!

As Antoine de Saint-Exupéry once said:

Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away.

Consider whether the elements on the page have the right amount of detail to fulfill their purpose. Does that button have the right texture and color treatment to serve its role? Does it need texture at all? Simplicity is key and is relative to the purpose of the element.



Polished designs can be found across the Web, if you look closely.

Polished touches lie all around the Web if you look. Little Big Details rounds up some details from the Web and other interfaces that many of us encounter every day but don’t notice.

To see a truly polished design, head over to PixelResort. To describe it in one word: sumptuous. No detail has been spared for each element. The entire experience has a weight and tangibility that stays with the user.

Bringing It All Together

Creating a truly beautiful and memorable website ain’t easy. You’ll need to make a significant investment of time and effort, focused in key areas for maximum payoff:

  1. Design the entire layout specifically for the given website. Putting a new coat of paint on an old template won’t give you the most compelling design possible.
  2. Create your own graphics to give the design a unique personality.
  3. Motivate visitors to scroll by weaving a story across the page that compels them to finish.
  4. Engage visitors with variety. Adding “rest stops” of interactivity will keep them actively thinking about the experience that you’re leading them through.
  5. Finally, polish, polish, polish! Think about every detail in the design. Make sure nothing is missed.

There you have it: five solid techniques to ensure that your website is a beautiful and memorable experience. This roundup is by no means comprehensive, but the techniques will pay you back with returning visitors, high engagement and user satisfaction.

Do you have any tips on adding beauty to a design? Feel free to post them in the comments!


Responsive Images and Web Standards at the Turning Point

The goal of a “responsive images” solution is to deliver images optimized for the end user’s context, rather than serving the largest potentially necessary image to everyone. Unfortunately, this hasn’t been quite so simple in practice as it is in theory.

Recently, all of the ongoing discussion around responsive images just got real: a solution is currently being discussed with the WHATWG. And we’re in the thick of it now: we’re throwing around references to picture and img set; making vague references to polyfills and hinting at “use cases” as though developers everywhere are following every missive on the topic. That’s a lot to parse through, especially if you’re only tuning in now—during the final seconds of the game.

The markup pattern that gets selected stands to have a tremendous influence on how developers build websites in the future. Not just responsive or adaptive websites, either. All websites.

What a long, strange, etc.

Let’s go over the path that led us here one more time, with feeling:

The earliest discussion of responsive images came about—predictably enough—framed in the context of responsive web design. A full-bleed image in a flexible container requires an image large enough to cover the widest possible display size. An image designed to span a container two thousand pixels wide at its largest means serving an image at least two thousand pixels wide. Scaling that image down to suit a smaller display is a trivial matter in CSS, but the requested image size remains the same—and the smaller the screen, the better the chance that bandwidth is at a premium.
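
The CSS half of that story really is trivial; here is a generic sketch of the usual flexible-image rule:

/* The image tracks its flexible container's width, but the file the
   browser requests is still the full-size asset. */
img { max-width: 100%; height: auto; }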

It’s clear that developers’ best efforts to mitigate these wasteful requests were all doomed to fall short—and not for lack of talent or effort. Some of the greatest minds in the mobile web—and web development in general, really—had come together in an effort to solve this problem. I was also there, for some reason.

I covered early efforts in my previous ALA article, so I’ll spare everyone the gruesome details here. The bottom line is that we can’t hack our way out of this one. The problem remains clear, however, and it needs to be solved—but we can’t do it with the technologies at our disposal now. We need something new.

Those of us working on the issue formed the Responsive Images Community Group (RICG) to facilitate conversations with standards bodies and browser representatives.

“W3C has created Community Groups and Business Groups so that developers, designers, and anyone passionate about the Web has a place to have discussions and publish documents.”
http://www.w3.org/community/

Unfortunately, we were laboring under the impression that Community Groups shared a deeper inherent connection with the standards bodies than they actually do. When the WHATWG proposed a solution last week, many of the people involved in that discussion hadn’t participated in the RICG. In fact, some key decision makers hadn’t so much as heard of it.

Proposed markup patterns

The pattern currently proposed by the WHATWG is a new set attribute on the img element. As best I can tell from the description, this markup is intended to solve two very specific issues: an equivalent to ‘min-width’ media queries in the ‘600w 200h’ parts of the string, and pixel density in the ‘1x’/’2x’ parts of the string.

The proposed syntax is:

<img src="face-600-200@1.jpg" alt=""
     set="face-600-200@1.jpg 600w 200h 1x,
          face-600-200@2.jpg 600w 200h 2x,
          face-icon.png 200w 200h">

I have some concerns around this new syntax, but I’ll get to that in a bit.

The markup pattern proposed earlier by the RICG (the community group I’m part of) aims to use the inherent flexibility of media queries to determine the most appropriate asset for a user’s browsing context. It also uses behavior already specced for use on the video element—in the way of media attributes—so that conditional loading of media sources follows a predictable and consistent pattern.

That markup is as follows:

<picture alt="">
  <source src="mobile.jpg" />
  <source src="large.jpg" media="min-width: 600px" />
  <source src="large_1.5x-res.jpg" media="min-width: 600px, min-device-pixel-ratio: 1.5" />
  <img src="mobile.jpg" />
</picture>

Via Github, this pattern has been codified in something as close to a spec as I could manage, for the sake of having all the key implementation details in one place.

Polyfills

So far, two polyfills exist to bring the RICG’s proposed picture functionality to older browsers: Scott Jehl’s Picturefill and Abban Dunne’s jQuery Picture.

To my knowledge, there are currently no polyfills for the WHATWG’s newly proposed img set pattern. It’s worth noting that a polyfill for any solution relying on the img tag will likely suffer from the same issues we encountered when we tried to implement a custom “responsive images” solution in the past.

Fortunately, both patterns provide a reliable fallback if the new functionality isn’t natively supported and no polyfill has been applied: img set using the image’s original src, and picture using the same fallback pattern proven by the video tag. When the new element is recognized, the fallback content provided within the element is ignored—for example, a Flash-based video in the case of the video tag, and an img tag in the above picture example.

Differing proposals

Participants in the WHATWG have stated on the public mailing list and via the #WHATWG IRC channel that browser representatives prefer the img set pattern, which is an important consideration during these conversations. Most members of the WHATWG are representatives of major browsers, so they understand the browser side better than anyone.

On the other hand, the web developer community has strongly advocated for the picture markup pattern. Many developers familiar with this subject have stated in no uncertain terms that the img set syntax is at best unfamiliar, and at worst completely indecipherable. I can’t recall seeing this kind of unity among the community around any web standards discussion in the past—and in a conversation about markup semantics, no less!

We’re on the same team

While the WHATWG’s preferences, and the web developer community’s differing preferences, certainly should be considered as we finalize a standard solution to the problem of responsive images, our highest priority must remain providing a clear benefit to our users: the needs of the user trump convenience for web developers and browser developers alike.

For that reason (for the sake of those who use the web), it’s critical not to cast these discussions as “us vs. them.” Standards representatives, browser representatives, and developers are all partners in this endeavor. We all serve a higher goal: to make the web accessible, usable, and delightful for all. Whatever their stance on img set or picture, I’m certain everyone involved is working toward a common goal, and we all agree that a “highest common denominator” approach is indefensible. We simply cannot serve massive, high-resolution images indiscriminately. Their potential cost to our users is too great—especially considering the tens of thousands of users in developing countries who pay for every additional kilobyte they consume but see no benefit from the huge file they’ve downloaded.

That said, I have some major issues with the img set syntax, at least in its present incarnation:

1. USE CASES

Use cases are a list of potential applications for the markup patterns, the problems that they stand to solve, and the benefits.

I’ve published a list of use cases for the picture element on the WHATWG wiki. It is by no means exhaustive, as picture can deliver an image source based on any combination of media queries. The most common use cases are screen size and resolution, for certain, but it could extend as far as serving a layout-appropriate image source for display on screen and a high-resolution version for printing—all on the same page, without any additional scripting.
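
For instance, that print use case might look like this under the proposed picture pattern (the filenames are invented; the media attribute values follow the example markup above):

<picture alt="Chart of quarterly results">
  <source src="chart-small.png" />
  <source src="chart-large.png" media="min-width: 600px" />
  <source src="chart-print.png" media="print" />
  <img src="chart-small.png" />
</picture>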

At present, no list of use cases has been published for img set. We’ve been working under the assumption, based on conversations on the WHATWG list and in the WHATWG IRC channel, that img set covers two uses specifically: serving high-resolution images to high-resolution screens, and functionality similar to min-width media queries in the way of the 600w strings.

It’s vital that we have a way to take advantage of new techniques for detecting client-side capabilities as they become available to us, and the picture element gives us a solid foundation to build upon—as media queries evolve over time, we could find ourselves with countless ways to tailor asset delivery.

We may have that same foundation in the img tag as well, but in an inevitably fragmented way.

2. MARGIN FOR ERROR

I don’t mind saying that the img set markup is inscrutable. It’s a markup pattern unlike anything seen before in either HTML or CSS. This goes well beyond author preference. An unfamiliar syntax will inevitably lead to authorship errors, in which our end users will be the losers.

As I said on the WHATWG mailing list, however, given a completely foreign and somewhat puzzling new syntax, I think it’s far more likely we’ll see the following:

 <img src="face-600-200@1.jpeg" alt="" set="face-600-200@1.jpeg 600w 1x, face-600-200@2.jpeg 600w 2x, face-icon.png 200w"> 

Become:

 <img src="face-600-200@1.jpeg" alt="" set="face-600-200@1.jpeg 600 1x, face-600-200@2.jpeg 600 2x, face-icon.png 200"> 

Or:

 <img src="face-600-200@1.jpeg" alt="" set="face-600-200@1.jpeg, 600w 1x face-600-200@2.jpeg 600w 2x, face-icon.png 200w"> 

Regardless of how gracefully these errors should fail, I’m confident this is a “spot the differences” game very few developers will be excited to play.

I don’t claim to be any smarter than the average developer, but I am speaking as a core contributor to jQuery Mobile and from my experiences working on the responsive BostonGlobe.com site: tailoring assets for client capabilities is kind of my thing. To be perfectly honest, I still don’t understand the proposed behavior fully.

I would hate to think that we could be paving the way for countless errors just because img set is easier to implement in browsers. Implementation on the browser side takes place once; authoring will take place thousands of times. And according to the design principles of HTML5 itself, author needs must take precedence over browser maker needs. Not to mention those other HTML5 design principles: solve real problems, pave the cowpaths, support existing content, and avoid needless complexity.

Avoid needless complexity

Authors should not be burdened with additional complexity. If implemented, img set stands to introduce countless points of failure—and, at worst, something so indecipherable that authors will simply avoid it.

I’m sure no one is going to defend to the death the idea that the video and audio tags are paragons of efficient markup, but they work. For better or worse: the precedents they’ve set are here to stay. Pave the cowpaths. This is how HTML5 handles rich media with conditional sources, and authors are already familiar with these markup patterns. The potential costs of deviation far outweigh the immediate benefit to implementors.

Any improvements to client-side asset delivery should apply universally. By introducing a completely disparate system to determine which assets should be delivered to the client, improvements may well have to be made twice to suit two systems: once to suit the familiar media attribute used by video tags, and once to suit the img tag alone. This could leave implementors maintaining two codebases that effectively serve the same purpose, while authors learn two different methods for every advancement made. That sounds like the world before web standards, not the new, rational world standards are supposed to support.

The rationale that dare not speak its name

It’s hard to imagine why there’s been such a vehement defense of the img set markup. The picture element provides a wider range of potential use cases, has two functional polyfills today (while an efficient polyfill may not even be possible with the img set pattern), and has seen an unprecedented level of support from the developer community.

img set is the pattern preferred by implementors on the browser side, and while that is certainly a key factor, it doesn’t justify a deficient solution. My concern is that the unspoken argument against picture on the WHATWG mailing list has been that it wasn’t invented there. My fear is that the consequences of that entrenched philosophy may fall to our users. It is they who will suffer when our sites fail (or when developers, unable to understand the WHATWG’s challenging syntax, simply force all users to download huge image files).

WE THE PEOPLE WHO MAKE WEBSITES

I’ll be honest: for me, no small part of this is about ensuring that we designers and developers have a voice in the standards process. The work that the developer community has put into the picture element solution is unprecedented, and I can only hope that it marks the start of a long and mutually beneficial relationship between us authors and the standards bodies—tumultuous though that start may be.

If you feel strongly about this topic, I encourage all designers and developers to join the WHATWG mailing list and IRC channel to participate in the ongoing conversation.

We developers should—and can—be partners in the creation of new standards. Lend your voices to this discussion, and to others like it in the future. The web will be better for it.


Responsive Typography: The Basics

In the heat of the relaunch I wrote a quick blog post on responsive typography, focusing solely on one aspect of our latest experiment: responsive typefaces. Without knowing the history of iA, you’d miss some key aspects of the responsive typography and design in our relaunched site. Instead of mashing up all our articles on the matter, I decided to start from scratch and explain everything step by step.

At iA, when we built websites we usually started by defining the body text. The body text definition dictates how wide your main column is; the rest used to follow almost by itself.

Used to. Until recently, screen resolution was more or less homogeneous. Today we deal with a variety of screen sizes and resolutions. This makes things much more complicated.

To avoid designing different layouts for every possible screen size, many web designers have adopted the concept of Responsive Web Design. In a nutshell this is the idea that your layout automatically adapts to the screen definition. There are different ways to define it. I like to put it this way:

  1. Adaptive layouts: adjusting the layout in steps to a limited number of sizes
  2. Liquid layouts: adjusting the layout continuously to every possible width

While both have advantages and disadvantages, we believe that adaptive layouts with as few breakpoints as possible are the way to go, because readability is more important than having a layout that is always as wide as the viewport. This is a debatable opinion on a complex matter in itself, but optimal readability requires a certain amount of control over the measure (column width) of the text, and in this regard a liquid layout creates more problems than it solves. More about that another time.
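
A minimal sketch of that adaptive approach, with just two breakpoints stepping the measure (the class name and widths are arbitrary placeholders):

/* One fluid column by default, then a controlled measure in two steps. */
.content { margin: 0 auto; padding: 0 20px; }
@media (min-width: 660px)  { .content { width: 600px; padding: 0; } }
@media (min-width: 1000px) { .content { width: 640px; } }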

Note: Responsive design already incorporates a lot of macro-typographic issues (type size, line height, column width), so it already incorporates responsive typography in many ways. Our first post on the responsive typography of our own site mainly covered our use of graded fonts. I’d like to talk about grading in the next post, and dive right into the basics of responsive macro typography on screen now.

Choosing a typeface

The right tone

Sooner or later, you need to decide what kind of typeface you want to use. The choice of your font is mainly a matter of tone, but, as every typeface has its own qualities and demands (or forbids) certain treatments, the choice of type has a lot of visual and technological consequences. With web fonts you now have a big choice of typefaces, so finding the one that fits has become yet another challenge.

We designed our own typeface for this website to experiment with graded fonts. We chose a serif because it fits our tone, and mirrors the refinement of our content (or at least that’s what we think). For iA Writer we chose a monospaced slab serif. Because the primary purpose of our program is helping you get a first draft out, we specifically chose Nitti — a typeface that feels strong and careful at the same time. The decision to use a monospaced typeface also came about because the first iPad’s operating system didn’t auto-kern proportional typefaces. Instead of using a proportional typeface that would be rendered poorly, we decided to go for a monospaced typeface right away.

Serif or sans serif?

Usually the choice falls between serif and sans serif. This is in itself a complex matter, but there is a simple rule of thumb that might help you: the serifed typeface is a priest, the sans is a hacker. One is not better than the other, but, for various reasons, a serifed typeface has a more authoritarian touch, whereas a sans serif feels more democratic. Remember, this is five thousand years of typographic history wrapped in two sloppy lines, so don’t take it too seriously.

A lot of people still think that for screen typography the question “serif or sans serif” answers itself. Actually, it’s not that easy. Contrary to common belief, both serif and sans serif can perform equally well if you choose a body text size above 12 pixels. Below 12 pixels, serifed typefaces don’t render sharply enough, but (and this brings us to the second point) on contemporary monitors 12 pixels is definitely too small anyway.

What size?

The size of your body text doesn’t depend on your personal preference. It depends on reading distance. Since in general computers are further away than books, the metric size of a desktop typeface needs to be bigger than the sizes used for printed matter.

The illustration below shows that the further away your body text, the bigger it needs to be. The two black and the two red A’s have the same metric size. But since the right pair is held further away, the perceived size is smaller. The red A in the right image has the same perceived size as the black A in the left image:

The further away you hold the text, the smaller it becomes visually. You need to make the text size bigger the further away the text is read, to compensate for the larger reading distance. How much bigger is, again, a science in itself. If you are inexperienced, a useful trick is to hold a well-printed book at a comfortable reading distance while looking at your website to compare.
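
If you assume perceived size is governed by visual angle, metric size should scale roughly linearly with reading distance (a small-angle approximation; the distances below are invented for illustration):

size_screen = size_print × (distance_screen / distance_print)

For example, body text that works at 12 pt in a book held at 45 cm needs about 12 pt × (60 cm / 45 cm) = 16 pt on a screen viewed at 60 cm.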

Graphic designers without Web design experience are surprised how huge good body text on the web is in comparison to printed matter. Mind you, it’s only big if you compare it side to side, not if you compare it in perspective.

If, after increasing the size of your body text to match, the new size irritates you in the beginning, don’t worry: that’s normal. However, once you’ve gotten used to it, you will not want to go back to the “standard” small sizes.

We’ve been promoting these “perspectively proportional” font sizes since 2006. Initially, our claim that Georgia 16px was a good benchmark for body text sizes provoked a lot of anger and even some laughter, but now it’s more or less a common standard. With higher resolutions, that standard is becoming slowly obsolete. But more on that later.

Line height and contrast

While body text size can be evaluated with that perspective trick, line height needs some adjustment. With more reading distance and (what we call) pixel smear, it’s wise to give screen text a little bit more line height than printed text. 140% is a good benchmark, but of course, it depends on the typeface you use.

Today it’s a given that you make sure that the contrast is not too weak (e.g. grey text on a light grey background) or too garish (e.g. pink on yellow). Since screen typefaces were designed to be displayed black on white, using dark backgrounds is also somewhat difficult, but these can work if done right. With contemporary high contrast screens it’s also preferable to choose either a dark grey for text or a light grey for the background, instead of a hard black on white. But that is, again, not the most important question.
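
Pulling the benchmarks from this section together into one sketch (the exact grey values are illustrative, not a prescription):

body {
  font: 100%/1.4 Georgia, serif; /* ~16px body text with 140% line height */
  color: #333;                   /* dark grey rather than hard black */
  background: #fafafa;           /* just off pure white */
}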

iPhone vs iPad

A lot of what we learned about responsive typography came from finding the perfect typography for our writing app. When we designed iA Writer for iPad we invested weeks to find the right typographic definition. At the time, the high resolution screen of the iPad was a completely new challenge, and it took quite some time until we understood how it works. When Apple introduced Retina displays for iPhones and later for iPad, everything changed again. We could write a whole book to explain how we got to the iconic look of the iA Writer typeface, but there is still so much to say about more general matters so I’ll cut to the chase.

If you compare our current version of Writer for iPhone with the version for iPads, you will notice that the type size is not the same.

Why different type sizes for iPhone and iPad? If you read the above explanation attentively, you might have already guessed.

  1. While the distance is not always the same, in general you hold the iPad a little bit further away. Whether you use your iPad on the breakfast table, on your knee sitting on your sofa, or right in front of your face lying in bed gives a variety of reading distances. This was an entirely new challenge, because the reading distance on desktop and laptop computers doesn’t vary that much. In order to make it work in all instances, we let the furthest reading distance define the type size. This might result in a slightly bigger than usual type size when you read it in bed, but it’s not uncomfortable, and generally you don’t use a writing application lying on your stomach in bed.
  2. You have less screen real estate on an iPhone, so you are forced to make adjustments.

Luckily, the iPhone is held closer to the face so being forced to use a smaller type size works out perfectly. From an average reading distance, both iPhone and iPad have a similar perceived type size.

Since the iPhone is held closer, the line height could also be tighter, which again is a necessity because of the smaller screen:

Not everything always works in your favor when you design for the screen. Interaction design is engineering: it’s not about finding the perfect design, it’s finding the best compromise. In our case we had to reduce the line height, and also the gutter and the spacing between characters:

The adjustments were so delicate that if you don’t know it, you won’t notice how small the gutter is. Why didn’t we just get rid of the gutter? The gutter is not an aesthetic matter, it lets the text breathe and helps the eye to jump from line to line. If you think that this all sounds esoteric: no, so far we’ve just covered the basics.

What about desktop computers?

Some people complain about the big font size in Writer for Mac. Just like we had to go for the biggest minimal size choosing the font size for iPad (which is held at different reading distances), we went for the biggest minimal font size on Mac as well. At the time our benchmark was a 24 inch high resolution iMac, where the perceived size is more or less the same as on all other devices.

Since the variety of Mac computers that run iA Writer is finite, we could determine the different possible resolutions. We looked at every possible configuration to make sure that the type size was the best compromise for most machines.

You might ask, “Why not just allow the user to choose the type size?” Well, adjusting type size is not a matter of taste, but a matter of reading distance. Since most websites and applications have an overly small type size, new customers would initially choose the type size they are used to (that is, too small a size) and never experience the full pleasure of our writing app. The main reason is not that we want to force a certain look upon all users: what we want is for iA Writer to work without settings and without fumbling, so that the only thing you can do with it is write. This has been the open secret of its success, and changing that would mean messing with its core. (What we do need to improve is accessibility integration for people with poor eyesight.)

Okay then, why not adjust to the device’s resolution automatically? Wouldn’t that be true responsive typography? That’s right, and we are working on something similar. Now, in adjusting to the resolution, you also have to choose the right optical weight to make sure that the typeface really works as intended at every size and resolution. With the type size and the resolution, the optics of the font change as well. That’s why iA Writer for Mac, iPad 1/2, and iPad 3 all have different grades. To explain the full logic behind grading digital fonts and the thoughts behind our new website, I need a little bit more time and space.
