The benefits and challenges of running a slow-growing business 22 Feb 2016, 5:58 am
It’s understandable why we’re all so interested in fast-growing businesses, thanks to the drama involved. Will the company in question be able to raise the next round of funding, or will they hit the end of the runway in a ball of flames? Will they be able to hire fast enough to meet their demands, or will the culture implode on itself as people flee to Google or Facebook? Will the company fight off unwanted takeover bids and gain an even bigger valuation, or will they end up regretting not taking the deal? Will the founders end up as multi-millionaires, or just another Silicon Valley casualty? Each option is as juicy as the next, and equally deserving of comment and speculation.
As rapid growth is considered the yardstick of start-up culture, it’s unsurprising that the majority of how-to articles focus on the challenges of running a fast-moving business. So how do you embed culture when your team has grown from 30 to 100 people in less than a month? How do you ensure your infrastructure is up to the task when your user base is doubling every few weeks? And how do you keep the company going when your monthly payroll is in the millions, but your income is in the thousands?
These are all very interesting questions, and ones that will help many budding entrepreneurs. However, as the founder of a deliberately slow-growing company, I’ve noticed a real lack of articles charting the challenges of slow growth; and challenges there are aplenty.
For instance, when fast-growing companies hit problems around team management, marketing and sales, or HR, they can usually hire their way out of the problem. So they’ll create a new management layer, build a sales and marketing team, or hire HR professionals. The speed at which they’re moving, combined with the funding they’ve raised, is often enough to power through these inevitable growing pains and get to the other side in one piece.
By comparison, slow-moving companies often have to live with these challenges for years, until they have the revenue or work available to justify even a part-time position. Until that point, slower-moving companies need to make do with the resources they have, figuring out ways to self-manage, spreading sales and marketing across the management team, or using external agencies for ad hoc HR work.
That’s why smaller companies end up having to focus on their core commercial offering, be that building, maintaining and supporting software if you’re a tech start-up, or offering design and development services if you’re an agency like Clearleft. This means that the traditional business functions you’d find in a large company (finance, marketing, HR etc.) end up taking a back seat; either by being distributed across the whole team, or concentrated amongst a small number of operations staff.
Neither of these approaches is ideal. For instance, you can adopt a “many hands make light work” attitude by distributing common admin tasks across the team. But having experienced (and expensive) practitioners spend time on admin isn’t particularly cost-effective. It can also be a little demoralising, especially if colleagues at larger companies don’t have to do this. The other option is to centralise typical business functions amongst a small group of operations staff. This works well for general admin duties, but can be challenging when you start to need specialist skills like sales, marketing or HR. So in the end you just struggle through until you grow big enough to justify these additional roles.
In fast-moving companies, hiring new people is less of a cultural challenge as the team are used to job descriptions fluctuating and new people joining all the time. In a slow-growth company, people get used to the status quo. It can be hard to relinquish part of your job to a new hire, or suddenly find yourself working under a manager when you never had one before. These changes need to be handled with much more care and sensitivity than in the typical start-up environment.
Fast-moving companies obviously have their fair share of cultural problems. Still, the pace of change can make it easier to shape the direction of growth. For instance it’s common to see companies of between 20 and 50 people start to define sets of company values. A fast-moving company may reach this point in a year, when the culture is still fairly new and malleable. By contrast a slow-growing company can take years to reach this point, by which time the culture has already solidified. This solidity has many benefits, such as cultural resilience; but it also makes culture change much harder.
These challenges aside, there are a lot of positives in running a slow-growth company. For a start there’s a lot less stress involved, and a lot less risk of something going spectacularly wrong. You have much more time to build the right team, and ensure the culture sticks, rather than just papering over the cracks. More importantly, you get to build a sustainable business on your own terms, rather than those of external funders. The biggest benefit for me is where you place your focus.
If you’re focussed on growth above everything else, it’s easy to sacrifice things like quality of service provision. Sometimes this is deliberate—like hiring less talented staff to meet current staffing needs, or winning less interesting work than you want, just to pay the bills. More often than not it’s just accidental; the natural result of managing so many spinning plates at once. For me, slow growth allows a company to focus on what really matters to them, building a sustainable business focussed on quality.
Of course some businesses need to grow fast in order to gain the economies of scale they need to survive; to jump over that chasm to a world of profitability. There are plenty more businesses who have found themselves forced to grow needlessly fast; either as a result of pressure from investors, or the founders’ own desire for scale. Many potentially sustainable businesses end up growing beyond their means and burning out too soon. I know the goal with many businesses is to “go hard or go home”, but I’d prefer to see 100 successful start-ups making 10 million in revenue each, rather than one billion-dollar unicorn and 99 failed ventures.
We won the moral argument but did we lose the business case for UX? 11 Feb 2016, 7:35 am
When we first started Clearleft 10 years ago, the bulk of my effort was focussed on explaining to clients what user experience design was, the extra value it offered, and why design needed to be more than just moving boxes around the screen. I’m pleased to say that it’s been a long time since I’ve had to explain the need for UX to our clients. These days clients come to us with a remarkable understanding of best practice, and a long list of requirements that contain everything from research, strategy, prototyping and testing, through to responsive design, mobile development and the creation of a modular component library. I think it’s safe to say that the quality of the average digital project has soared over the past 10 years, but so has the effort involved.
This isn’t unusual and happens across all kinds of industries as they develop and become more professional. You only have to look at the advances in health care over the last 50 years to see the dramatic rise in quality. Back in my childhood, the most advanced diagnostic tool was probably the X-ray. These days a whole battery of tests are available, from ECGs to MRIs and beyond. The bar has been raised considerably, but in the process, so has the average cost of patient care.
Over the past few years I’ve seen client expectations rise considerably, but digital budgets have remained largely unchanged. We’ve done an amazing job of convincing digital teams that they need proper research, cross-platform support, and modular style guides, but somehow this isn’t filtering back to the finance departments. Instead, design teams are now expected to deliver all this additional work on a similar budget.
I believe one of the reasons for this apparent lag is that of tempo. Despite the current received wisdom of continuous deployment, most traditional organisations still bundle all their product and service improvements into a single big redesign that happens once every 4 or 5 years. Most traditional organisations’ understanding of what a digital product should cost is already half a decade out of date. Add to this the fact that it takes most large organisations a good 18 months to commission a new digital product or service, launch it, then tell whether it’s been a success, and you have all the hallmarks of a terrible feedback loop and a slow pace of learning.
I think another problem is the lack of experienced digital practitioners in managerial positions with budget setting authority. It’s relatively common for digital budgets to be set by one area of the company, completely independently from those setting the scope. Project scope often becomes a sort of fantasy football wish list of requirements, completely untethered from the practical realities of budget.
I couldn’t begin to tell you the number of projects we’ve passed on in the last couple of years because their budgets were completely out of whack with what they wanted to achieve; or the number of clients who have asked for our help when their previous project failed, only to discover that the reason was probably that their previous agency agreed to deliver more than the budget would actually allow. These organisations end up spending twice as much as they could have done, because they wanted to spend half as much as was necessary—the classic definition of a false economy.
Fortunately, once you’ve made this mistake, you’re unlikely to make it again. Speed of learning is hugely important. In fact I think the organisations that will fare best from the effects of digital transformation are those who can up their tempo, fail faster than their competitors, learn from their mistakes, and ensure they don’t happen again. Basically the standard Silicon Valley credo.
It is possible to avoid some of these mistakes if you hire strategically. I’ve seen a fairly recent trend of hiring in-house digital managers from the agency world. You end up hiring people who will have delivered dozens of projects over the past 5 years, rather than just one or two. These people also tend to be fairly savvy buyers, knowing which agencies have a good reputation, and which are little more than body shops.
As for us practitioners, I think we’ve done a great job of convincing our peers of the value of good UX design and digital best practices. We now need to put the same effort into getting that message across to the people commissioning digital services and setting budgets, to ensure we can actually deliver on the claims we’ve made.
The Industrialisation of Design (or why Silicon Valley no longer hires UX designers) 3 Feb 2016, 10:00 am
Despite having their roots in Silicon Valley, UX designers are a rare breed inside traditional tech companies like Google, Facebook and Twitter. In some cases they are so rare that other designers claim UX design doesn’t even exist. As a result I thought it would be interesting to explore where this attitude has come from, to see if it can hint at where our industry is heading.
In my (largely anecdotal) experience, Silicon Valley startups are focussed on hiring product designers at the moment. If you haven’t come across the product designer term before, you can think of them as next generation web designers; talented generalists with an affinity towards mobile and a desire to create great digital experiences for all.
While hiring product designers is all the rage at the moment, that hasn’t always been the case. Many early stage start-ups were originally conceived by individuals who considered themselves user experience designers. Many of these individuals have subsequently moved into design leadership roles at companies like Amazon, Adobe and IBM.
UX design is undoubtedly a specialism, focussing on the strategic and conceptual aspects of design, rather than the more tangible elements of UI. In that regard it has close similarities with service design, but is typically scoped around digital experiences. As practitioners traditionally came to UX design later in their careers, either through Information Architecture and Human Computer Interaction, or UI design and front-end development, there are naturally fewer experienced UX designers than in other disciplines.
This lack of supply, combined with increased demand, started to cause problems. Thankfully, a rising awareness around the general concept of user experience (as opposed to the practice of user experience design) saw more and more UI designers explore this space. Designers started to gain an increased sensitivity towards the needs of users, the demands of different platforms, and an understanding of basic interaction design practices like wireframes and prototypes. A new hybrid began to emerge in the form of the product designer; somebody who understood the fundamentals of UX Design, but retained their focus on tangible UI design.
The viability of the Silicon Valley product designer was made possible by several interesting trends. First off, tech companies started to hire dedicated design researchers; a role that UX designers would often have filled themselves. They also started to hire dedicated product managers, removing the need for designers to engage in deep product strategy. This has led many experienced UX designers to follow careers in research and product management, while others have moved towards IoT and service design.
At the same time, the rise of design systems has reduced the reliance on traditional craft skills. Rather than having to create interfaces from scratch, they can now be assembled from their component parts. This has allowed product designers to spend more time exploring newer fields of interaction design like animated prototypes. You could argue that thanks to design systems, product designers have become the new interaction designers.
This is further helped by companies with a vibrant developer culture and a focus on continual release. Rather than having to spend months researching and strategising, you can now come up with a hunch, knock up a quick design, launch it on a small subset of users and gain immediate feedback.
As a result of these infrastructure changes, tech companies no longer need people with deep UX expertise at the coalface. Instead these skills are now centred around management and research activities, allowing the companies to grow much faster than they otherwise would.
However this approach is not without growing pains, as I learnt when chatting to a design team director at one of the big tech companies recently. There was definitely a sense that while the new breed of product designers were great at moving fast and delivering considerable change, they lacked some of the craft skills you’d expect from a designer. Instead, design languages, prototyping tools, research teams and multivariate testing were maybe acting as crutches, hiding potential weaknesses. There was also a concern that product designers were so focussed on the immediate concerns of the UI that they were struggling to zoom out, see the big picture and think more strategically.
All these concerns aside, it’s easy to see why, inside the tech industry bubble, UX design may no longer be recognised as a distinct thing.
Digital Education is Broken 31 Jan 2016, 10:59 am
Ever since I started blogging in the early noughties, the emails came in. At first in dribs and drabs, one every few months. By the end of the decade, however, they were arriving at a rate of one or two a week. Emails from disgruntled students who had spent up to £9k a year on tuition fees, and even more on living expenses, only to find themselves languishing on a course that was woefully out of date.
Their emails were filled with tales of lecturers from engineering, graphic design or HCI departments, co-opted to teach courses they didn’t understand because, well, it’s all just computers really? Tales of 19-year-olds effectively becoming teaching assistants on the courses they were paying for, because they knew more than their lecturers. Students walking out halfway through their courses, because they were learning more from their evening gigs than they ever could at school.
It was in this context that Clearleft started our general internship program way back in 2008; to provide the growing ranks of self-taught designers and developers with the knowledge and experience they needed to succeed in the workplace.
Now don’t get me wrong. I’m not one of those libertarian Silicon Valley types who believe the role of education is to churn out dutiful employees. Far from it. Instead I want my tax-funded education system to produce well-rounded members of society; individuals who are interested in following their passions and who have been taught the tools to learn and think. Sadly digitally focussed courses, in the UK at least, are failing to meet even these most basic standards.
As I walk the halls of the end-of-year degree shows, I’m amazed and saddened in equal measure. The work coming out of digitally focussed courses with “User Experience”, “Interaction Design” and “HCI” in their titles is shockingly poor. The best courses represent the fetishes of their course directors; more art than design in most instances. The worst courses have the whiff of Kai’s Power Tools about them.
You’d be forgiven for thinking the institutions themselves were broken, were it not for the amazing digital work coming from other courses like Product Design, Motion Design, Graphic Design and even Architecture; work that shows a deep understanding of creative problem solving and an appreciation of the medium. So why are digital courses so bad?
I sit down for lunch with a lecturer friend of mine. He bemoans the state of digital design education, as he attempts to scratch a living on the higher education equivalent of a zero-hours contract, working far more hours than he is paid for. Fighting for quality inside an organisation that doesn’t really care; that has too many other stakeholders born in a different era to worry about this “digital thing”.
The students are keen to learn, but how much can you really teach in 6 hours of lectures a week, by somebody who has never designed a commercial website in their life; or at least not in the last 6 years? Is it any wonder that the graduates from a 10-week General Assembly course leave with more face time (and a better portfolio) than graduates of an 18-month Masters?
And so we continue to do what we can. Answering emails from disgruntled students, speaking on courses, offering student tickets, hosting CodeBar events, and running our internships.
And my lecturer friends do what they can. Running the best course possible within a broken system; hoping (and fearing) digital transformation will eventually disrupt their sector, like other sectors before it.
However there’s only so much any one individual can do on their own, which is why I’m pleased there are events like The Interaction Design Education Summit. I hope that through events like this (and others) we can put pressure on the institutions, improve the quality of courses, and help bring digital education out of the dark ages, in order to give students the learning experience they truly deserve.
Why do agency account managers exist? 26 Jan 2016, 5:10 am
This morning Alison Austin asked the question…
Why do agency account managers exist? #seriousquestion
— Alison Austin (@alicenwondrlnd) January 26, 2016
It’s a valid question and one I’ve often wondered about myself. As a company we’ve always been resistant to hiring dedicated account managers, having seen the worst excesses of our industry. I remember chatting to an account manager from a large digital agency, during a BBC supplier evening a few years back. She bragged about how she only got the job because she went to the same private school as one of their major clients and had a lifetime membership to The Ivy. It seemed her job largely involved getting clients drunk.
I suppose this is what account management was like back in the days of Mad Men. You would win a big “account” made up of multiple smaller projects, then do everything you could to keep the client sweet. This is somewhat understandable when I remember another conversation I had a few years back, with the marketing manager from a large fizzy drink brand. He explained that their agency selection process involved picking 3 agencies from the NMA top 100 list each year, hiring one, and firing one of their incumbents. In this environment of fear, is it any wonder that agencies would do everything in their power to curry favour?
Fortunately I’ve only experienced this attitude once in my professional life. It was in the early days of our company and we’d just had a really positive meeting with a prospective client, so we invited them to lunch. From the booze fest that followed, it was clear these folks were used to being entertained, as they explained how they judged their agencies on the quality of restaurants they got taken to.
In some ways I could understand the attitude. I got the sense that they weren’t especially well paid (or indeed respected) by their company, so agency entertaining was one of the few perks of the job. However I looked back on the episode thinking that if we had to win work based on our ability to entertain clients rather than our ability to deliver, we would have failed as an agency.
While this attitude may still exist in some corners of our industry, it’s not one I recognise anymore. I like to believe that the majority of projects in the digital sector are awarded based on skill, experience, quality and price. So if the Mad Men age is over, what do modern account managers do?
For very large accounts spanning multiple projects, the account manager acts as a constant presence on the project, ensuring the needs of the client are met. They’ll have a good understanding of the big-picture challenges the client is facing, and be able to share those insights with the individual teams. They will also be there to help solve problems and smooth over any bumps in the road; essentially acting as the client champion within the organisation.
From the agency’s perspective, they are also there as a consultant; helping to develop the client as a longer-term prospect. This means working with the client to find new opportunities to solve their problems, possibly in areas the client didn’t know the agency had experience in.
In smaller agencies, this role is often performed by the founder, project managers and project leads. In larger companies it’s centralised amongst a small number of account executives. It’s an important role, but not without its challenges.
When speaking with friends at agencies with a strong account management ethic, common gripes often come up. The main one is less experienced account managers promising clients new features with little understanding of what’s entailed. This is especially problematic on fixed-price, fixed-scope projects where margins are tight.
I tend to hear more concerns around account management from clients, who often feel that account managers are either too overtly sales-driven (constantly trying to get them to spend more money) or acting as blockers between them and the people working on their projects.
Too often, these problems are caused by a misalignment between the client’s needs and the way account managers are being judged and remunerated. Either that, or it’s a reflection of poor agency practices and an attempt to keep clients at arm’s length, possibly to hide an ever-changing team of junior practitioners and freelancers.
As such, while I understand the benefits of larger agencies hiring a small number of very experienced account managers with a solid understanding of the industry, a large number of junior account managers always feels like a bit of a warning sign to me. However, as somebody who has never really experienced account management first-hand (good or bad), I’d love to know what you think.
Can the balance between divergent/convergent thinking explain mid-career peaks? 25 Jan 2016, 5:47 am
Divergent/convergent thinking is a fundamental part of the design process, and something most experienced practitioners are familiar with. Essentially the design process is broken down into two phases; a phase where you open up the problem space and explore as many different directions as possible; and a phase where you start analysing all the possible solutions you’ve come up with, in order to settle on the perfect answer.
It’s easiest to see this approach play out in the world of branding; the designer filling their notebook with pages and pages of graphic experiments, before selecting a handful that meet the brief in different and interesting ways. Rest assured that all good designers work this way, from physical product designers cycling through dozens of concept drawings, through to interface designers exploring countless different UI variations.
If you’ve been involved in a well executed brainstorming session, you’ll understand the benefits of this approach; allowing you to explore a large number of ideas, without the dampening effect of analysis.
You may have also experienced a badly run “brainstorming” session where ideas are debated and discarded as soon as they are created. This approach not only slows the process down, severely reducing the volume of ideas that are generated, it also discounts potentially novel ideas before they’ve had a chance to breathe.
This process always reminds me of classic crime dramas where the detectives post all of the clues up on a wall in search of patterns. The mediocre detective will jump to the most obvious conclusion first, spending the rest of their time trying to prove their hunch right (and often arresting the wrong person in the process). Meanwhile our hero spends their time assembling clues, exploring the problem space, and analysing all the possible angles, before coming to the less obvious, but ultimately correct conclusion.
So as a designer, how do you decide how much time to spend exploring the problem space and generating ideas, versus homing in on the end solution? And what are the risks involved in spending too much or too little time on either activity?
In my experience, novice designers tend to jump to the convergent phase far too quickly. This is partly because they’ve been mis-sold the idea that design is driven by that elusive spark of creativity, rather than a deeper process of problem solving. Creative ideas are viewed as rare and precious things in need of immediate nurture.
Early in your career, all your ideas seem fresh and novel, so you’re eager to get stuck into the execution, especially as your craft skills are more developed than your ideation skills. Essentially you end up running from an area you don’t feel comfortable with, to one you better understand. I’ve seen plenty of novice designers abandon potentially interesting ideas in favour of more fully fleshed but obvious ones. These ideas may not seem obvious to the designer in question, but more experienced designers will have seen the same tropes time and again.
Good design educators work hard to prevent their students from jumping to the most obvious conclusion, running exercises like “100 designs in a day”. As the name suggests, the students are encouraged to come up with 100 versions of a common design problem, like designing a new chair. The first fifty or sixty designs are usually easy to come by and are typically discarded for being too obvious—variations of designs they’ve seen many times before. It’s the next twenty or thirty designs that get really interesting, where the designer has to really think about the problem and come up with something truly novel.
The “100 designs in a day” exercise is a type of “design game” that acts as a “forcing function”; essentially a way of forcing you to think divergently. The best designers will tend to have an arsenal of similar activities to draw upon when needed.
I’m always nervous when I come across designers who appear to be driven by “creativity” rather than process. Eventually this unbounded creativity will dry up, and they’ll be reduced to aping the styles of other designers, unable to explain their designs other than “it felt right”. Instead, like my old maths teacher, I like to see the workings; to understand how the designer got to the current solution, and make sure they could replicate the process again and again.
If novice designers spend too little time exploring the possibility space, experienced designers often spend too long; trying to explore every nook and cranny and gather every piece of evidence possible before starting down the route to a solution. This is evidenced by the classic Einstein quote many senior designers love to reiterate: “If I had only one hour to save the world, I would spend fifty-five minutes defining the problem, and only five minutes finding the solution.”
While it’s true that any nontrivial problem requires a good amount of divergent thinking, spending too much time exploring the problem can form a mental trap akin to analysis paralysis, making it difficult to come up with a solution that solves all the problems you’ve uncovered. This is one of the reasons why large organisations often benefit from enlisting the help of external consultants who can bring a fresh perspective unencumbered by years of exploration and analysis. But these external agents may only have a 6-month grace period before they get indoctrinated into the organisation and start getting similarly overwhelmed.
Architect Eliel Saarinen put it best: “Always design a thing by considering it in its next larger context - a chair in a room, a room in a house, a house in an environment, an environment in a city plan.” Novice designers regularly jump straight to the chair, ignoring the room it’s in, while very senior designers get so obsessed with the room, the house and the city plan that they ignore the impending seating needs. The logic often seems to be “how can I possibly design a chair, when the city infrastructure to deliver the chair is broken!”
From my experience working with students, interns and junior designers, novices often spend less than twenty percent of their time on divergent activities, and end up obsessing over the convergent process. This works for relatively simple projects, but fails for anything remotely complicated. By contrast, many senior designers will spend up to eighty percent of their effort on divergent thinking, leaving their production team to do most of the converging. Although the ultimate figure depends on the problem you’re solving, in general I think the balance needs to be closer to 60/40 in favour of divergent thinking.
If the idea that designers start their careers focussed on convergent thinking and become more divergent over time holds true, this may help explain why many designers seem to reach a creative peak around 8 years into their careers. At this point they have got out of the habit of rushing to the most obvious solution, and are spending a good deal of time understanding the problem and exploring a variety of leads. They still have enough focus on delivery to reserve enough time for convergence, thereby avoiding the divergence trap.
Design like a Michelin Star Chef 19 Jan 2016, 6:22 am
The England of my youth was a desert for good food. The difference between a “good” restaurant and an average one lay mostly in the surroundings; that and the use of slightly more expensive ingredients. But white cotton table cloths and snooty service weren’t enough to hide the mediocre food that lay therein. That’s why I used to relish my regular trips overseas, to eat at restaurants where the owners actually cared about what they were producing.
Jump forward 20 years and the landscape has changed dramatically. England is awash with top-end restaurants and Michelin Stars abound. Quality cooking now permeates popular culture, thanks to shows like MasterChef. This attitude has trickled down to neighbourhood bistros, mixing locally-sourced produce with the skill of the chef. As a result we’ve developed the vernacular and know when something doesn’t make the grade; we’ve basically become a nation of food critics.
We still have average restaurants, but they are few and far between. Instead, a rising tide has raised all boats. Even pubs, and more recently the humble pizza restaurant and burger joint, have gone gastro. The UK really is in the midst of a food revolution. So much so that I now look forward to returning from overseas trips, because of the food.
In this environment, it’s no wonder that a recent show on Netflix charting some of the best restaurants in the world was an immediate hit amongst my colleagues. The level of passion and craftsmanship the chefs demonstrated was amazing. These chefs sweated over every detail, from the provenance of the produce, to the service experience. Experimentation was key, and you could tell that every dish they produced looked and tasted fantastic, elevating cooking to an art form.
This focus on quality struck a chord with me as a designer. It’s an attitude that’s been baked into Clearleft from the outset, hiring people who really care about the details and want to go the extra mile, not just for our clients or their users, but for the field itself. Like great chefs, designers find it difficult to explain the extra effort that goes into an amazing composition. It’s actually fairly easy to knock up something palatable if you have the tools to hand. However it takes a huge amount of effort to craft something noteworthy.
Where quality is concerned, whether it’s with food or design, it usually takes 20% of the effort to deliver 80% of the quality, and a further 80% of effort to deliver the last 20% of quality. I call that the effort-to-quality curve, and most people stop where the return on each unit of effort is highest. But it’s the last 20% that elevates a dish from average to amazing.
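To make the shape of that curve concrete, here’s a minimal sketch in TypeScript. The power-law form and the exponent are my own assumptions, chosen purely so that 20% of the effort yields 80% of the quality, as described above; real projects won’t follow so tidy a formula.

```typescript
// A toy model of the effort-to-quality curve. The power-law form and the
// exponent k are assumptions, fitted so that 20% effort -> 80% quality.
const k = Math.log(0.8) / Math.log(0.2); // ≈ 0.139

// Quality delivered (0..1) for a given fraction of total effort (0..1).
const quality = (effort: number): number => Math.pow(effort, k);

// Marginal return: how much quality each extra unit of effort buys.
const marginal = (effort: number): number => k * Math.pow(effort, k - 1);

for (const e of [0.05, 0.2, 0.5, 1.0]) {
  console.log(
    `effort ${(e * 100).toFixed(0).padStart(3)}% -> ` +
      `quality ${(quality(e) * 100).toFixed(0)}%, ` +
      `marginal return ${marginal(e).toFixed(2)}`
  );
}
// effort   5% -> quality 66%, marginal return 1.83
// effort  20% -> quality 80%, marginal return 0.55
// effort  50% -> quality 91%, marginal return 0.25
// effort 100% -> quality 100%, marginal return 0.14
```

Under this (admittedly crude) model, the first slice of effort pays back more than ten times as much as the last, which is exactly why stopping at “palatable” is so tempting.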
Sadly the current design climate reminds me of 90s cooking. The big studios, like the big chain restaurants, are more interested in delivering a consistent experience rather than a quality one. So they put processes in place that ensure minimum quality, but do nothing to foster true creativity. Many agencies and individuals come off looking like fast food joints, using frameworks and templates to speed production and deliver a slew of me-too products lacking in love or a sense of craft.
By comparison, when I look around our studio—and others like ours—I see the similarities with a kitchen full of expert chefs: each with their own area of expertise, but brought together by a passion for good design and quality code.
However in a world dominated by fast food and even faster design, it’s often difficult to explain the difference to customers—why a meal by a Michelin Star chef is worth more than one from a chain restaurant. It’s difficult because, unlike the restaurant world, most customers haven’t seen the effort required to deliver quality; haven’t sampled enough dishes to tell bad from good.
The only way to combat this is for designers to make their effort visible as well as their output; to educate customers on the importance of ingredients and technique; and to design like a Michelin Chef.
In defence of the hamburger menu 13 Jan 2016, 5:22 am
It’s interesting seeing how quickly hamburger menus have turned from handy UI element to social pariah. Rarely a day goes by without some young designer pronouncing hamburger menus the biggest UI crime since Clippy. They cite a raft of arguments why hamburger menus are bad, from the theoretical (it’s mystery meat navigation that users don’t recognise) to the anecdotal (three of my five usability subjects didn’t know what it was when I asked), to the statistical (60 percent of the users on my site don’t interact with the hamburger menu).
All these arguments hold water and in normal circumstances I’d agree. After all, it’s not the most immediately obvious icon, and the last thing any designer wants to do is cause undue stress or confusion. However I think there’s an innate Britishness about me that feels the need to stick up for the underdog and protect something that feels like it’s been getting an unnecessary kicking.
Ignoring its longer history for a second, the hamburger menu is part of an emergent design language that resulted from the rise of responsive design. It solves a difficult problem (how to represent a potentially large number of menu items on a small screen) in a relatively neat and tidy way.
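For anyone who hasn’t built one, here’s a minimal sketch of the pattern in TypeScript. The element IDs, class name and breakpoint are my own illustrative choices, not part of any standard:

```typescript
// A minimal sketch of the hamburger pattern: a single toggle button that
// shows or hides the full navigation on small screens. The element IDs,
// the class name and the breakpoint below are illustrative assumptions.
const toggle = document.getElementById('menu-toggle') as HTMLButtonElement;
const nav = document.getElementById('site-nav') as HTMLElement;

// Expose the menu state to assistive technology as well as visually.
toggle.setAttribute('aria-controls', 'site-nav');
toggle.setAttribute('aria-expanded', 'false');

toggle.addEventListener('click', () => {
  const isOpen = nav.classList.toggle('is-open');
  toggle.setAttribute('aria-expanded', String(isOpen));
});

// On wider viewports the menu items fit on screen, so CSS would simply
// hide the button and show the nav permanently, e.g.
// @media (min-width: 48em) { #menu-toggle { display: none; } }
```

The economy of the pattern is the point: one small icon buys back an entire row of navigation on a narrow phone screen.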
Agreed, the icon doesn’t clearly explain what it does, but then neither does the pause button on a typical media player. One of the main reasons we’re able to use this symbol unlabelled is the fact that it worked its way into our cultural repertoire thanks to continued repetition on tape decks and VCRs.
Had Twitter existed in the 80s, I’m sure a group of well-meaning designers would have tried to shoot down the humble pause button—and its cousins “stop” and “record”—with similar arguments. However I think they would have done so from an oversimplified understanding of what usability is.
If you go back to the early definitions of usability, they state that a usable interface is one that is learnable, efficient, memorable, produces low errors, and is satisfying.
I’d argue that the pause button on a VCR is learnable (once you’ve pressed it once you know what it does), memorable (the icon is simple and easy to recall) and produces low error rates (if you accidentally press it you can easily recover with little negative effect). It’s also relatively efficient (it’s one press after all) and the action on an old-style mechanical VCR was a tiny bit satisfying. So as a result of these qualities, the pause button became part of the global iconographic lexicon.
I believe the hamburger menu shares many of these characteristics, and has the same opportunity to become a globally recognised icon through consistent exposure. However this will only be possible if we stop showing off to our friends by “hamburger shaming”, and embrace the plucky icon for what it is, warts and all.
Star Wars Plot Summary [Spoilers] 17 Dec 2015, 7:01 am
Early in the movie we’re introduced to a young orphan eking out a living on a dust-bowl of a planet. This orphan comes in contact with a friendly and charismatic droid who has just escaped from a big battle with the forces of evil. The droid is carrying secret plans which need to be returned to the rebel base. The young orphan meets a wise guardian who was once a significant figure in the rebellion, along with a wise-cracking foil and a Wookiee named Chewbacca. Together they attempt to return the friendly droid to its owners.
The orphan turns out to be a talented pilot and starts showing an interest in the force. Later the orphan inherits a “lightsaber” that used to be owned by a young Jedi named “Skywalker”. Unfortunately the stormtroopers track them down to an exotic cantina inhabited by all kinds of strange creatures, and a shoot-out ensues. Our main protagonists escape on the Millennium Falcon, a ship which did the Kessel Run in 12 parsecs, and head towards the rebel base.
At the same time the female lead is captured by a masked enemy versed in the dark side of the force, and is taken to a giant weapon the size of a planet where she is tortured for information. The planet-sized weapon destroys a number of planets before homing in on the rebel base.
Having experienced the destructive power of this weapon, our heroes go in search of the female lead. While our wise-cracking foil helps our female lead escape, the wise guardian attempts to shut off the field generators. In the process we marvel at the lack of safety barriers on battle station bridges and wonder about the general health and safety aspects of being a stormtrooper. Once the generators have been disabled, our wise guardian comes face to face with the masked enemy, while the other heroes look on helpless. The wise guardian is struck down by the evil lord’s lightsaber in a failed attempt to redeem his soul. Our heroes fight their way off the battle station and escape to the rebel base.
The brave pilots of the rebellion mount a raid on the planet sized battle station, flying their X-wings down canyons, fending off tie-fighters, while being shot at by cannons. It’s all very exciting. Our brave pilot manages to destroy the weapon seconds before it can destroy the last rebel base, but not before the evil lord manages to escape. Everybody celebrates.
Throughout this adventure, our young orphan has developed an impressive control over the force, which they presumably inherited from their mysterious parents. In order to learn about these powers, the orphan sets off to a distant planet to meet the last remaining Jedi and become a Jedi Knight themselves.
The End
Product shearing layers and the "double-diamond" approach to design 26 Apr 2015, 8:32 am
The organising principles of agile are based around the needs of developers. So processes and systems are broken down into units of functionality and fed to development teams along a pipeline of delivery.
We all know that estimating big tech projects is a crapshoot, so the focus with agile is on just-in-time decision making, problem solving and optimising throughput. With so many unknowns this is a much more rational and realistic approach than trying to plan and estimate everything up front.
Unfortunately, while this organising principle performs well for developers, it can be problematic for designers who need to tackle things as part of a coherent system, rather than a series of functional chunks.
Agile allows for iteration, so one way of tackling this problem is to give in to the inherent uncertainty and allow the design to emerge over time. So you slowly end up acquiring design debt, with the hope that you’ll have time at the end of the project to bring all the disparate pieces together. Sometimes this happens, but often this gets relegated in favour of more functionality.
I believe this is one of the reasons why so many established tech companies struggle to produce a holistic user experience and end up creating a disjointed UI instead. Lots of small pieces loosely joined rather than a coherent and considered system.
Lean UX has attempted to add a layer of design thinking to the process. However the minimal shippable unit is still based around features rather than systems, and stories rather than journeys or experiences.
With this method experiments are run in the wild on real users rather than on paper. This has the benefit of giving you real rather than approximated feedback. However it can also lead to significant amounts of technical debt and a poorly considered product in the hands of your users for longer than is absolutely necessary. This probably doesn’t matter in a new start-up with just a few users, but can be much more damaging for an established company with millions of customers expecting you to deliver on your brand promise. How often have we seen the MVP end up being the final product?
By comparison, the traditional UX approach sees problems iterated on with paper and prototypes rather than live users and working code, allowing you to explore solutions in a faster and more cost-effective way. The trick is for these sketches to remain as recommendations rather than specifications, which is often not the case.
Of course these two approaches aren’t mutually exclusive, but I’d like to see Lean companies do more of their learning with prototypes and less at the expense of real users. Not everything has to be deduced from first principles, and there is a huge canon of design knowledge to draw upon here.
The tension between design and development reminds me of the famous shearing layers diagram which Stewart Brand used to explain the different speeds at which buildings evolve - the interior moving faster than the shell.
While developers find it easier to break off pieces of functionality, encapsulate them and then tie everything together as they go, designers require a higher vantage point in order to map out the entire system to do their jobs well. The business often appreciates this vantage point as well.
In typical, functionality-driven agile environments, a single product will be broken down into multiple product teams, each with their own product manager, design lead and team of developers. These smaller “products” will usually focus on “slices” of a user journey rather than the whole experience - another reason why many products feel somewhat disjointed.
The pace of progress is dictated by the speed at which the developers can ship products, forcing designers to operate at a tempo they’re often uncomfortable with. This also forces them to focus their talents on production and delivery rather than strategic thinking, which may be fine for junior designers but can be both isolating and demoralising for more experienced practitioners.
Ironically, a small group of designers are usually better able to service the needs of a large number of developers by working as a team across the whole product, rather than being separated out into individual product teams. However this approach is often branded as “waterfall” and dismissed by many agile proponents.
Now if you have a fairly unreconstructed design team who are more comfortable dictating rather than collaborating, they may have a point. The goal here isn’t to hand a spec document to the Dev team in the form of a set of wireframes or working prototype and simply get them to build what you’ve specified without question.
However I do believe we’re entering a post-agile world where product teams can adopt the best parts of waterfall and agile, without having to pick one or the other and stick to it dogmatically. Instead, let’s be aware of the differing shearing layers and adopt an approach that works for all parties.
In his recent IA Summit talk, Peter Merholz spoke about the “double diamond” approach, which is the method I personally favour.
At Clearleft we typically undertake an initial definition phase where we talk to the business, interview potential users, develop a product strategy, sketch out the key user journeys and create the basics of a design system. This system isn’t set in stone, but provides just enough of an overview to set the general direction and avoid us acquiring too much design debt.
During this initial phase of the project, the team can be fairly small and efficient. Maybe just one UX designer, one UI designer and one creative technologist. We can test ideas quickly on paper or with low-fidelity prototypes and discard work that proves ineffective. We’re iterating, but at a pace dictated by the needs of the product, rather than the artificial tempo of a “sprint”. We’re not looking for perfection, but hope to get the main design problems solved to a level of fidelity all parties (business and tech) are happy with.
Once the plan is fleshed out, we’re more than happy for the tech team to work in whatever way best suits them, be that Scrum, Kanban or some other variant of agile. With a better understanding of the whole, it becomes easier to break things down into chunks and iterate on the individual stories. Designs will change, and the language will evolve, but at a pace that works better for both parties.
This “double diamond” approach places the needs of designers at the forefront of the first “diamond” and sees them leading the initial strategic engagement. The second “diamond” flips this on its head and sees design servicing the needs of development and production.
I’m sure some people will claim this is already part of the agile canon, be that “iteration zero”, “dual-track agile” or some other methodological variation. For my part, I really don’t care, just as long as design gets to dictate the process during the formative phases while development drives production.