That bug burned me a couple times before I switched to using 24-hour time exclusively on my devices.
For something that people use every day, the iOS vertically-scrolling, fake-dial UI is just horrible in terms of usability and aesthetics, and I was glad when they added the ability to summon a numeric keypad with a single tap on the center dial.
The keypad input and interaction is extremely well thought out and efficient for setting the time.
I once had to use a timesheet app that required scrolling around for all the times during the day. Timesheets are already painful enough; why compound that skin-crawling experience with such a horrendous UI? It was so hard for management to corral everyone into entering their times that they went back to spreadsheets.
Very exciting indeed. I will definitely do a deep dive into this paper, as my current work is exploring layers of affordances such as these in workflows beyond coding.
Thanks. This workflow of four prompts has the benefit of not using the mouse.
I have a friend who uses Photoshop to make posters for bands. The resulting images are better, since he puts the faces of real people in the posters, but it takes him a million clicks every time. I use only Emacs to make the image: a much faster, more relaxing workflow, since I just edit text most of the time.
Gemini's image generation abilities, especially regarding typography, are in the same ballpark as Ideogram's. Ideogram is a little better sometimes (vertical text, for example, trips up Gemini), but because Gemini is natively multimodal, it works very well with text descriptions of images.
Ideogram has an upper limit on the total number of tokens it can accept as text input. It is not natively multimodal, as far as I know.
I noticed the difference in this show as well, and I hope it continues.
Besides any conscious philosophy of the producers & writers, perhaps making the show more character driven as opposed to procedural has an impact on the stories. Maybe it's easier to understand when a suspect's rights are being violated (and to not be banal about it) when you're writing a deeper portrayal of the person who wields the power.
The sad thing about all those observations is that these things surely happen anyway, and lots of people end up in jail regardless, because they don't have good representation to point out how they've been railroaded, and because they've got a plea bargain dangling in front of them.
People at least know "Nobody read me my rights", "I want to plead the Fifth", and "I want my lawyer" from seeing it on TV. If your arrestee-- or your jury pool-- has a higher level of awareness of common legal gotchas, they'd be able to demand a better deal. "I know you screwed up, the plea deal isn't good enough."
Hey! I'm Scott, a "designer who codes." Principal/staff level product designer & builder working remotely from Atlanta, GA, USA (previously San Francisco & Berkeley, CA).
I'm equally at home in a text editor or Figma file, a stakeholder meeting or collaborative workshop, and I love complex problems. If you're solving something interesting, let's have a chat!
I'm seeking employment or contract work. Currently, I consult with engineering teams and dev shops, leading UX and product design for B2B and B2C SaaS products.
→ As a design generalist with 10+ years of experience, I get to wear several hats as we go from idea to launch. Here's a summary of the skills & benefits I bring to the table:
• Product design for SaaS products, focusing on user retention and product-led growth.
-- Exploring opportunities for AI integration is a passion of mine. I love building MVPs or enhancing existing products with features that users demand and stick around for.
• UX & UI design that inspires user adoption & retention. Great UI is based on UX systems thinking that prevents the problems that cause churn.
-- I utilize methodologies like Object Oriented UX, Jobs to be Done, and Design Build Use to unravel complexity and understand what really motivates users and helps them be awesome.
• Designing in the browser using markup & CSS. This workflow allows us to rapidly iterate prototypes for user testing.
-- Handing off clean, shippable frontend code (instead of just Figma files) streamlines the frontend development process. Pesky cross-platform visual bugs never have a chance.
→ If you've read this far, let's connect and see if we'd be a good fit!
Hey! I'm Scott, a "designer who codes." Product designer/builder, consultant and SaaS founder working remotely from Atlanta, GA, USA (previously San Francisco & Berkeley, CA).
I'm equally at home in a text editor or Figma file, a stakeholder meeting or collaborative workshop, and I love complex problems. If you're solving something interesting, let's have a chat!
I consult as a designer for engineering teams and dev shops, leading UX and product design for B2B and B2C SaaS products. As a generalist with over 10 years of experience, I get to wear several hats as we go from idea to launch.
→ Here's a summary of the services & benefits I bring to the table:
• On demand "Design Counseling": Short, live sessions to help teams & execs with unexpected design, product and process issues.
-- Once I'm familiar with your company and product, you can book a session any time you need advice from a staff level designer. (Let's connect now so that I'm GTG when you need me!) :)
-
• Product design for SaaS products, with a focus on user retention and product-led growth.
-- Exploring opportunities for AI integration is a passion of mine, and the optimal AIX will be different between B2B and B2C products.
-- I love building MVPs for early-stage companies as well as enhancing existing products with features that users demand and stick around for.
-
• UX & UI design that inspires user adoption & retention. Great UI is based on UX systems thinking that prevents the problems that cause churn.
-- I utilize methodologies like Object Oriented UX, Jobs to be Done, and Design Build Use to unravel complexity and understand what really motivates users and helps them get shit done.
-
• Designing in the browser using markup & CSS. This workflow allows us to rapidly iterate prototypes for user testing.
-- When I hand off clean, shippable frontend code that's customized to your stack (instead of just Figma files), your devs are free from getting bogged down in visual style and CSS. And pesky cross-platform visual bugs never have a chance.
-
→ If you've read this far, let's connect and see if we'd be a good fit!
I'd love to learn about your product vision and the challenges you're facing. And once I'm in your rolodex (or on your speed dial?) you'll have a design partner at the ready for quick consults or more extensive work.
Living in storm prone regions for most of my life has given me the same habit. All my sensitive electronics get unplugged when storms approach.
Two of my family members have had devices fried by lightning strikes over the years, and not even in regions known for the worst electrical storms.
I keep some portable battery packs handy in case I need to charge a phone, and if I'm working will switch to my laptop and tablet screens.
Of course, one can't conveniently unplug everything (HVAC, big kitchen appliances, etc.) but it's easy enough to safeguard work and lifestyle electronics.
Turning the TV off and listening to the storm is usually a nice change of pace, too.
A defining characteristic of lightning is that it jumps gaps (i.e., all the air between the cloud and the earth), so I believe it will jump right over surge protection.
No, unplugging works because cables are antennas. Disconnecting power cables dramatically reduces the lightning's ability to couple into the device.
The device itself usually has shielding, capacitors, transient suppressors, etc., and is usually designed to be a poor antenna, so on its own it will be affected much less than when charging.
Surge protectors do work, mind you, but only for weaker storms or pulses coming in from the outside power lines. Because they are physically separated from the final device, they are limited in how much they can protect against direct coupling.
I suppose the difference is that surge protection still provides a guide to a possible circuit, whereas unplugging greatly increases the number of micro-states in which you are not part of a viable path.
Cal Sailing Club is a great way to start. You'll learn more quickly on dinghies than keelboats and the skills will benefit your entire sailing career as you move on to bigger boats.
Also check out the Friday night races at Berkeley Yacht Club. Skippers always need crew so it's pretty easy to get a ride. Just hang out at the gate between 5 and 6pm with your gear and say hi!
Check out the Village Homes subdivision in Davis, CA.
It was designed with narrow streets and off street parking so that trees could more effectively shade the pavement. Also, the paved streets alternate with bike and walking paths between rows of houses.
One result of the shaded streets and increased greenery is an ambient summer temperature that is noticeably cooler compared to other nearby neighborhoods.
The original planners worked with the local FD to make sure their trucks could turn around in the cul de sacs.
As a frontend designer, not a developer, I'm intrigued by the techniques presented by the author, though most devs commenting here seem to be objecting to the code quality. (Way above my pay grade, but hopefully a solvable problem.)
As someone who loves to nerd out on creative processes, it's interesting indeed to contemplate whether AI assisted dev would favor waterfall vs incremental project structure.
If indeed what works is waterfall dev similar to the method described in TFA, we'll want to figure out how to use iterative processes elsewhere, for the sake of the many benefits they bring to usability and utility.
To me that suggests the main area of iteration would be A) on the human factors side: UX and UI design, and B) in the initial phases of the project.
If we're using an AI-assisted "neo waterfall" approach to implementation, we'll want to be highly confident in the specifications we're basing it all on. On regular waterfall projects it's critical to reduce the need for post-launch changes due to their impact on project cost and timeline.[1] So for now it's best to assume we need to do the same for an AI-assisted implementation.
To have confidence in our spec document we'll need a fully fledged design: a "fully humane," user-approved, feature-complete UX and UI. It will need to be aligned with users' mental models, goals, and preferences as much as possible. It will need to work within whatever the technical constraints are and meet the business goals of the project.
Now all that is what designers should be doing anyway, but to me the stakes seem higher on a waterfall style build, even if it's AI-assisted.
So to shoulder that greater responsibility, I think design teams are going to need a slightly different playbook and a more rigorous process than what's typical nowadays. The makeup of the design team may need to change as well.
Just thinking about it now, here's a first take on what that process might be. It's an adaptation of the design techniques I currently use on non-waterfall projects.
----------
::Hypothesis for a UX and UI Design Method for AI-assisted, "Neo-Waterfall" Projects::
Main premise:
Designers will need to lead a structured, iterative, comprehensive rapid prototyping phase at the beginning of a project.
| Overview: |
• In my experience, the DESIGN->BUILD->USE/LEARN model is an excellent guide for wrangling the iterative cycles of a rapid prototyping phase. With each "DBU/L" cycle we define problems to be solved, create solutions, then test them with users, etc.
• We document every segment of the DBU/L cycle, including inputs and outputs, for future reference.
• The USE/LEARN phase of the DBU/L cycle gives us feedback and insight that informs what we explore in the next iteration.
• Through multiple such iterations we gain confidence in the tradeoffs and assumptions baked into our prototypes.
• We incrementally evolve the scope of the prototypes and further organize the UX object model with every iteration. (Object Oriented UX, aka OOUX, is the key to finding our way to both beautiful data models and user experiences).
• Eventually our prototyping yields an iteration that fulfills user needs, business goals, and heeds technical constraints. That's when we can "freeze" the UX and UI models, firm up the data model and start writing the specifications for the neo-waterfall implementation.
• An additional point of technique: Extrapolating from the techniques described in TFA, it seems designers will need to do their prototyping in a medium that can later function as a keyframe constraint for the AI. (We don't want our AI agent changing the UI in the implementation phase of the waterfall project, so UI files are a necessary reference to bound its actions.)
• Therefore, we'll need to determine which mediums of UI design the AI agents can perceive and work with. Will we need a full frontend design structured in directories containing shippable markup and CSS? Or can the AI agent work with Figma files? Or is the solution somewhere in between, say with a combination of drawings, design tokens, and a generic component library?
• Finally, we'll need a method for testing the implemented UX and UI against the USE criteria we arrived at during prototyping. We should be able to synthesize these criteria from the prototyping documentation, data modeling and specification documents. We need a reasonable set of tests for both human and technical factors.
• Post launch, we should continue gathering feedback. No matter how good our original 1.0 is, software learns, wants to evolve. (Metaphorically, that is. But maybe some day soon--actually?) Designing and making changes to brownfield software originally built with AI-assistance might be a topic worthy of consideration on its own.
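To make the design-token option above concrete, here's a minimal sketch of what a machine-readable token set might look like as a "keyframe constraint." All names and values here are hypothetical, not from any real project; the idea is only that a frozen, structured artifact like this can be diffed against AI-generated UI output to detect drift during the implementation phase.

```typescript
// Hypothetical frozen design tokens (illustrative names and values).
// An agent's generated CSS/markup could be checked against these.
const tokens = {
  color: {
    primary: "#1a73e8",
    surface: "#ffffff",
  },
  spacing: {
    sm: "8px",
    md: "16px",
  },
} as const;

// Look up a token by dotted path, e.g. "color.primary".
// Returns undefined for paths outside the frozen set, which a
// verification step could flag as drift from the approved design.
function resolve(path: string): string | undefined {
  const value = path
    .split(".")
    .reduce<unknown>(
      (node, key) =>
        node && typeof node === "object"
          ? (node as Record<string, unknown>)[key]
          : undefined,
      tokens
    );
  return typeof value === "string" ? value : undefined;
}
```

A simple audit would then walk every color and spacing value the agent emitted and assert each one resolves to a token; anything that doesn't is a change the implementation phase wasn't allowed to make.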
----------
So as a designer, that's how I would approach the general problem. Preliminary thoughts anyway. These techniques aren't novel; I use variations of them in my consulting work. But so far I've only built alongside devs made from meat :-)
I'll probably expand/refine this topic in a blog post. If anyone is interested in reading and discussing more, I can send you the link.
Email me at:
scott [AT] designerwho [DOT] codes
----------
[1] For those who are new to waterfall project structure, know that unmaking and remaking the "final sausage" can be extremely complex and costly. It's easy to find huge projects that have failed completely due to the insurmountable complexity. One question for the future will be whether AI agents can be useful in such cases (no sausage pun intended).