
Users on mobile can’t make their screens that big, and the average users on desktops don’t expect a webpage to work when they resize their browsers that small. Trying to make both work with the same design can have a negative impact on your customers, from both a usability perspective and by increasing page weight.

Instead, my advice is to create individual designs for each, share when it makes sense, but actively diverge when it’s good for your customers. There doesn’t need to be a single version of a page.

Your mileage may vary.


There are certainly trade-offs to consider, which is one of the reasons UX design demands so much expertise, no matter how many HNers dismiss it as some trivial practice (one that they, conveniently, never want to do).

Maintaining N versions of your application has costs that aren't necessarily great for your customers either. In my experience it usually cashes out into one version (either mobile or desktop) getting all the support and features while they slowly drip down into the other versions. Meanwhile a responsive design can have the upside of forcing support and feature rollout for all devices simultaneously.

None of this is easy.


People tend to dismiss UX 'experts' because frequently they end up being the ones who destroy perfectly good interfaces based on trends or similar. The principled ones who adopt a scientific approach are much rarer.


I don't think it is an issue of "scientific" versus "unscientific".

It's really an issue that many UX designers don't know how browser rendering works, so they design static pages as if they were printing in a magazine.

Pixel perfect mocks are terrible for designing responsive UIs. Trying to build pixel perfect pages in a browser is impossible. Somehow these designers get through school with zero understanding that designing for web is different from designing for print.


It's 2023, I don't think that's really the case any more. If anything, designing for print is now the part of the discipline that has to suffer through "web-isms".

The reality is just that designers gonna design - and designing is often an unscientific craft, pursuing aesthetic values before practical considerations. Google and Apple designers are well-paid and experienced web-heads, and still they led us into a land of well-padded desperation.


It's 2023 and I am still explaining how responsive design works to new design grads that my company just hired, so there's definitely some failure somewhere.


A great recent example is the Slack redesign, which didn't improve any user flows and lowered the information density. And that activity badge with sticky notifications. (Shift+Esc is your friend here.)


You can’t even exit a search. And god forbid you click a notification while doing a search and have to click “back” 500 times to get to a usable interface.


> People tend to dismiss UX 'experts' because frequently they end up being the ones who destroy perfectly good interfaces based on trends or similar.

Yep. I guess Reddit hired one of those.

old.reddit.com is awesome and has stood the test of time; the new Reddit is awful and slow (and I hate it).


There are two things here.

One, not all Reddit users prefer old.reddit (I do).

Two, Reddit aren't designing for users; they're designing for advertisers to push adverts at users.

Wrt the second point, this means designers aren't designing to the brief you would give them. Like when engineers design obsolescence into a product (it's purposefully inferior for the end user).

Any idiot can see it's a bad user experience to keep forcing users into a design they don't like, but it's not for UX reasons that they do it. The trick is keeping UX good enough.


> Wrt the second point, this means designers aren't designing to the brief you would give them.

To extend that: note that the very companies that spend the most money on UI, that hire the experts and pay them well, that set the trends for the entire field of UX, are all companies whose primary business is user abuse: advertising, high engagement, etc. That's what they pay the UI/UX experts to optimize for, and that's what ends up leaking into the wider field, leading astray people who are trying to build things beneficial to their users/customers.


The ones who adopt a scientific approach are by far the worst. Design is ultimately about how things ought to be, an act of judgement, while science is wholly unsuitable for such questions: it only tells us what is, which, following Hume, cannot on its own lead to conclusions about what ought to be.

You get a sort of garbage-in-garbage-out effect if you apply science to a field like design, where it only serves to amplify your own convictions: what is fed into the scientific process as unquestioned assumptions inevitably falls out of it as conclusions.

At best you get KPI driven design, which is a vehicle for enshittification, not for building great design.


I'm not following. Science gives us good guidance on what works well or will let us achieve our goals, all the time. It's basically the whole point of doing it at all.

I took the poster as meaning UX that considers the results of, and perhaps even performs, actual user testing & observation, to decide what works and what doesn't. Like operating system vendors used to. I'll grant that "scientific" UX that's just incompetent (99% of the time) application of "telemetry" and A/B testing is awful. But that—and the other bad kind that's just trend-following, personal preference, and whatever will get the best reaction in a design presentation meeting full of non-experts—aren't what I understood as being advocated.

The good kind performs & pays attention to science.


What science doesn't give good guidance on is how to select those goals in a vacuum. The goals at best end up being a version of someone's personal opinion, since that's the only form of opinion we have access to.

Any opinions you get out of the scientific method were put in there by the person designing the experiment.


The core of science is iteration based on hypothesis generation and experiment. The scientific approach to UX is simply a hypothesis that a UX change is superior, plus an experiment with a measure of whether or not that is true. Yes, of course you can scientifically optimise for user-antagonistic KPIs (no need to invoke Hume; his point is true and also useless here), but the alternative to a scientific approach is literally just opinion, which on the whole is far worse. Would you want your airline cockpit's UX designed based on a designer's opinion, or on real scientific study of how people process information and intuit controls? Of course one would take a good designer's opinion over some MBA-ridden process, but a good designer is likely intuiting what would be validated by a more scientific approach anyway.


Can you relate that to specifics from the article? To my (admittedly non-designer) eyes it appears to be a great example of how science can be used to improve design, and I happen to agree with the findings presented, so I'm curious where you see this breaking down.


The tacit assumption being made in the article is that good design shouldn't frustrate the users.

I don't disagree with this, but it's nonetheless an assumption that went into the study, and likewise a conclusion that fell out of it.


Nobody here is disputing we have values that are not scientific.


> The ones who adopt a scientific approach are by far the worst

I strongly disagree.

Design without considering all of the HCI research that has been done is what you call "garbage-in-garbage-out." We already know how humans perceive information, what makes things salient or invisible, and so on, yet the current design trends completely disregard that with flat UIs and trendy designs that have poor usability.

> At best you get KPI driven design, which is a vehicle for enshittification, not for building great design.

No, you just get trendy design, not usable design.


You're more than correct.

In practice, working in small and medium organizations, I have met very few UX designers. Instead I have met plenty of graphical designers that know almost nothing about UX design. I've been at places where I - as a backend developer - know more about practical UX design than anyone on the design team.

I think the reason why we have "bad mobile first design with awful desktop UX" is because very few of the people designing these experiences are UX designers.

I was surprised the article didn't highlight the horror show that is Vector22 at Wikipedia, a design so colossally bad that after three years of sunk costs the only path to saving face was to make it the default theme for all users: "Mission Accomplished!"

https://en.wikipedia.org/wiki/Wikipedia:Vector_2022


Not getting it. They increased readability by limiting maximum line length! That is a colossally good thing. Surely that's a graphic design 101 kind of decision. (It's a design rule that significantly predates "UX".)

The issue at hand is that overly long lines reduce reading speed and comprehension of the content[1]. The optimum length for a digital line of text is somewhere between 66 characters per line and 100 characters per line. I personally use the 100cpl rule. For reference, this HN page has ~185 characters per line on my 1920x1080 display at default scaling.

I do actually remember un-maximizing my browser in order to improve the rate at which I could read the text of ur-Wiki pages.

And then they provided an escape for old men shaking their fists at the sky. Given a choice, I would, without hesitation, choose the new design.

[1] https://en.wikipedia.org/wiki/Line_length#:~:text=characters...


Agreed. I had the pleasure of working with talented specialist UX designers early in my career, and their designs were really fantastic. They also worked really closely with the developers, both to understand the medium they were designing for and as a first line of feedback before things got to real users/clients.

Unfortunately, some of the designers I've worked with more recently were primarily graphic designers without a UX background, and they actually became an impediment to good design because they were given authority over it despite not really knowing what they were doing.

I think it's probably an unfortunate consequence of there being more demand for UX designers than there are good UX designers, combined with a lack of jobs available for graphic designers, and a lot of hiring companies not really understanding what makes a good UX designer.


I read that page, then browsed Wikipedia a little on desktop. It's a site that I use very often and I didn't notice anything weird. I could have sworn that it has been the theme of Wikipedia for at least 10 years.

I also checked if I had created some rules for that site in Stylus and uBlock Origin, nothing. For once I'm lucky that a change didn't destroy one of my workflows. One could say that if I didn't notice the transition they could have spared themselves all the work, or one could argue that they performed a perfect job.

Anyway, I get directly to the page I need from Google. I found several threads on Reddit complaining about the change, and in this one https://www.reddit.com/r/wikipedia/comments/10g2cir/im_prett... I see a different usage pattern: "all I had to do was open the site and use the search bar. And then from there it was easy to get to the main page, current events, etc." The home page, current events? I'm sure I never heard about current events before now, and as for the home page, I know there is one, but my browser's search bar is a shorter path to Wikipedia's internal pages.


Without going into details about Vector22, it's certainly better today than it was at launch. It still has a very poor floating ToC UX.


How is Vector2022 bad? Text columns that are 100 words wide are bad desktop UX.


Huge engineering cost, though. If most of your customers are on mobile, it makes sense to optimize for mobile and hope it's "good enough" on desktop.

At the end of the day it's all an ROI problem (as are most things).


The cost need not be huge. Most of the cost should be the content, with only the theme differing. Even ignoring that, two themes are only hard if you build them independently; often just a few changes to one theme are needed to make it acceptable, and that is good enough. This in turn means you can limit costs: spending one week on making a good desktop theme will already make a big difference, as you get the worst offenders fixed.


One problem with two separate designs is deciding when to show one vs. the other. This gets especially tricky when people share links. Wikipedia, for example, has two different URLs: one for mobile and one for desktop. How often do you get links to the mobile version instead of the desktop one?

And if you keep the URL the same but serve different output depending on the browser, then you get inconsistent behaviour between two different devices.

Nailing the UX for mobile and desktop is actually pretty damn hard.


I think you're trying to make a very nuanced point, and I tend to agree that there are different needs for different viewport sizes. But I think it's important to note the difference between the design and the technology used to build it. The technology should, IMO, switch as seamlessly as possible between the various layouts when it makes sense from a viewport size perspective. From a technological point of view, you definitely don't want, IMO, to deliver completely different sites based on device type; we've tried that before and it isn't a good idea.

I also think one of the things good designers do is take this into account, and make pages that are built up of components that work at various sizes, not just scaled up from mobile. In addition, a good designer will set up the page design so that it can scale up and down nicely from one viewport size to another.

So, while I don't 100% agree that you need "individual" designs for each, I do think you need a designer that takes the different viewport sizes into account and provides the appropriate adjustments for each. And developers that are skilled at then building those pages.


> Users on mobile can’t make their screens that big

I can connect my Librem 5 phone to a screen/keyboard and I get a full desktop.


It's not mobile when it's connected to a screen/keyboard, is it?


It's never mobile: It runs a desktop GNU/Linux. Dedicated apps are convergent, i.e., automatically change depending on the screen size: https://puri.sm/posts/converging-on-convergence-pureos-is-co...


I split my laptop screen vertically, usually with a browser on either side. Occasionally a web page will render itself as if on mobile because it thinks I'm on a narrower screen.

I agree with the comment above that it is very hard to make one website responsive to multiple screen sizes.


Except you are on a narrower screen. Sounds like correct functionality to me.


You can request the desktop site on mobile, though


From an SSR perspective this seems rather hard. How do you correctly identify the user's device at serve time?


People say SSR like it's a new concept, but this was how it worked for a long time.

Guess based on the user agent (or other fingerprinting metric of choice), redirect to the guessed site, give the user the option to override when the page appears, and remember the choice in a cookie (or local storage).

Though personally I think you can do a lot with responsive CSS if you try hard enough - that is my preferred option.
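The guess-then-remember flow described above can be sketched in a few lines of server-side logic. This is a minimal illustration, not any framework's actual API; `pick_site` and the `MOBILE_UA` pattern are hypothetical names, and real user-agent sniffing is far messier than one regex:

```python
import re

# Hypothetical, deliberately crude user-agent check for the first visit.
MOBILE_UA = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)

def pick_site(user_agent, cookies):
    """Decide which variant to serve: an explicit cookie set by the
    user's override always wins; otherwise guess from the User-Agent."""
    if cookies.get("site") in ("mobile", "desktop"):
        return cookies["site"]  # remembered user choice
    return "mobile" if MOBILE_UA.search(user_agent or "") else "desktop"

# First visit from a phone: guess "mobile".
print(pick_site("Mozilla/5.0 (iPhone; ...) Mobile/15E148", {}))
# Same phone after the user chose "request desktop site": honor the cookie.
print(pick_site("Mozilla/5.0 (iPhone; ...) Mobile/15E148", {"site": "desktop"}))
```

The key point is the precedence: the stored choice beats the guess, which is exactly what the "request desktop site" complaint elsewhere in this thread is about.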


> redirect to guessed site

I always wondered about that. What's the point of redirecting instead of serving a different template on the same URL?


That works, you'd just have to put the setting in a cookie rather than local storage.


Why? If you can detect mobile user-agents on the first page load, can't you do it in the next page loads too?


Devices should be truthful about the type of content they request. If your phone somehow tells my website that it's a tablet or a laptop, then you should reconsider the intelligence of whoever developed your software.


> Devices should be truthful in the type of content they request.

I think that websites should assume that devices are being truthful. I should be able to request the desktop view on my phone or request the mobile view on my computer. The former I can do sometimes, the latter I can only do with developer tools (and usually doesn't work because the website detects that I'm on desktop!). Browsers could add a header to switch to the mode in which the website dynamically readjusts based on actual device parameters like window size, but by default I need the view to be what I requested regardless of my window size and device type.

You know how Wikipedia has no table of contents on mobile? I made my browser request the desktop site by default so that I could see the table of contents and wouldn't have to tap to open the article sections. (Unfortunately, Wikipedia changed its desktop view UI by moving the table of contents into a hamburger button. On mobile, the desktop view forces me to tap the hamburger button to view a blocking popout of the table of contents, while on desktop the contents are automatically opened in a sidebar.) If Wikipedia had forced a dynamic design on me that restricted me to the mobile view on mobile, I would've wasted time opening article sections just to decide whether I wanted to open them in the first place.


My phone often does tell remote sites it is a desktop, because as bad as the desktop experience is, often that is the only way to get at something. (I don't want an app for my doctor's office; I check it after my yearly physical, and the rest of the time it takes up space I could use for another picture.)


Does it do that automatically or does it have an option?


If you can't, at least give me a choice.


Obligatory links for anyone that is interested in learning more about Bézier curves:

https://pomax.github.io/bezierinfo/

https://pomax.github.io/bezierjs/

https://github.com/Pomax/bezierjs
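For anyone who wants the core idea without clicking through: a Bézier point can be evaluated with de Casteljau's algorithm, which is just repeated linear interpolation of the control polygon. A minimal sketch, with control points chosen arbitrarily for illustration:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeatedly lerping
    adjacent control points until one point remains (de Casteljau)."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A cubic curve: endpoints (0,0) and (3,0), with handles pulling upward.
ctrl = [(0, 0), (1, 2), (2, 2), (3, 0)]
print(de_casteljau(ctrl, 0.0))  # start point, (0.0, 0.0)
print(de_casteljau(ctrl, 0.5))  # (1.5, 1.5), the apex of this symmetric curve
print(de_casteljau(ctrl, 1.0))  # end point, (3.0, 0.0)
```

The same function works for any degree: pass two control points and you get a line, four and you get the familiar cubic from font and SVG tooling.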



Might I also add: https://youtu.be/aVwxzDHniEw?feature=shared It works through them from first principles with beautiful animations.


I personally really like this video, but it's interesting how when I showed it to a group of CS undergrad students, they all loved the animations but performed poorly on a post-video quiz about the topic. I knew the content already, but I wonder if this video is really as educational as it appears to be.


Her other video on the subject https://www.youtube.com/watch?v=jvPPXbo87ds&t is far more interesting and informative imo.


I’m guessing the venue you’re talking about is tiny. There are only 1 or 2 major venues in the US that aren’t owned by Live Nation.

Anecdote != data

Cyber did an episode that covered this in more detail, btw: https://www.vice.com/en/article/z348k3/cyber-why-concert-tic...



Someone on Reddit added Nim to the table, which I found interesting:

https://uploads.peterme.net/nimsafe.html

(Source: https://www.reddit.com/r/nim/comments/maj1lz/nim_safety_in_c...)


Which is a bit silly because Nim is a garbage collected language, and hence competing in a different league.


The league of being safe, without use-after-free: letting the developer focus on productivity while providing the language features to do C-style programming when required.


Isn't the Nim GC optional?


It is optional in the sense that D's GC is also optional: technically true, but you have to go out of your way to make it work for you, and average off-the-shelf libraries cannot be easily utilized.


This is a response to both you and the grandfather comment.

Nim now has ARC/ORC, which is a reference-counting scheme similar to Swift's. This is not a tracing GC like those of Java, Go, or D.

The entire standard library and most average libraries "just work" with it, and it will be the default in the near future.

https://nim-lang.org/blog/2020/12/08/introducing-orc.html


The first thing that comes to my mind is that there are different axes you may need to scale along. Microservices are a common way to scale when you're trying to increase the number of teams working on a project. Dividing across a service API allows different teams to use different technology and release on different schedules.


I don't necessarily disagree, but I believe that you have to be very careful about the boundaries between your services. In my experience, it's pretty difficult to separate an API into services arbitrarily before you've built a working system - at least for anything that has more than a trivial amount of complexity. If there's a good formula or rule of thumb for this problem, I'd like to know what it is.


I agree. From my perspective, microservices shouldn’t be a starting point. They should be something you carve out of a larger application as the need arises.



Adding to this, I see two other elements that make this tough to replicate in Kotlin:

1. Type erasure

2. Using sealed classes requires instantiation, while the Rust version is zero overhead.


> 1. Type erasure

There is no need to access type information at runtime here.


Is there anything you would recommend instead? Or any resources you have on hand to get more information?


You mean a critique of Maria Montessori's education in general?

Then I'd refer you to Kilpatrick's 1914 text (from the same era as Montessori, and a progressive educator himself): https://archive.org/details/montessorisystem00kilprich

I just chuckled as I re-read his account of her arithmetic material:

"On the whole, the arithmetic work seemed good, but not remarkable; probably not equal to the better work done in this country. In particular there is very slight effort to connect arithmetic with the immediate life of the child. Certainly, in the teaching of this subject, there is for us no fundamental suggestion."

Mind you, that's from 1914. Pedagogics have improved since then...

Or do you mean a better understanding of how pre-numeric mathematics works? Most efforts are built around Piaget's teachings (not very scientific, as he "just" observed his own three children): https://en.wikipedia.org/wiki/Piaget%27s_theory_of_cognitive...

One sample of such adaptations for pre-numerics (not geometry) is the work of Carin de Vries (sadly only in German): http://oops.uni-oldenburg.de/1014/1/vridia10.pdf


I call BS.

Kilpatrick was a disciple of Dewey; both were known haters of Maria Montessori and her work, and never gave any specific criticism of her system, just appeals to their own authority. They basically rejected her purpose in education (developing the faculty of reason; learning writing, reading, arithmetic, and geometry) because of their progressive-education, anti-reason dogma.

There isn't much difference between the insights of Montessori and Piaget. Piaget worked with Montessori early in his career. The experimental nursery school in Geneva, La Maison des Petits, where Piaget carried out his first studies of children in the 1920s, was a modified Montessori institution, and Piaget was the head of the Swiss Montessori Society for many years.

Jean Piaget was a great scientist who conducted systematic experiments with countless children. Your claim that he only observed his own 3 children is a lie. Scientists at the University of Geneva to this day carry on with his research and experiments.

That last reference you provide is some obscure German PhD thesis which does not reference Montessori's work at all; it has a major focus on teaching mentally retarded children, as well as teaching mathematics out of books to children older than the ones that go to Montessori schools (commonly age 3-6).


Kilpatrick contra Montessori: I'd call the 70 page critique quite specific. Quote: "The Montessori child learns self-reliance by free choice in relative isolation from the directress. He learns in an individualistic fashion to respect the rights of his neighbors. The kindergarten child learns conformity to social standards mainly through social pressure focused and brought to bear in a kindly spirit by the kindergartner." Clearly the writings of a hater ;)

> There isn't much difference between the insights of Montessori and Piaget.

How a child learns, and what a big influence self-interest and self-control have, is quite similar between the two; I personally also agree. But the immense impact and usefulness of Piaget's stages of cognitive development are originally his and not to be found in Montessori's teachings.

> Piaget was a great scientist who conducted systematic experiments with countless children. Your claim that he only observed his own 3 children is a lie.

Well, it's hyperbole based on Ginsburg & Opper, 2004. I included the hyperbole because I thought it might come up after I brought up Piaget's name. His well-respected and often built-upon work is often criticized for its small initial sample size, for its reliance on language as the critical examination tool, and because some stages don't develop in all or even most children as he predicted.

His work was still a giant leap.

> That last reference you provide is some obscure German PhD thesis which does not reference Montessori's work at all; it has a major focus on teaching mentally retarded children, as well as teaching mathematics out of books to children older than the ones that go to Montessori schools (starting at age 3).

The author of this thesis is one of the most respected scientists in Germany in the field of pre-numeric mathematics. Teachers of mentally retarded children had to think about pre-numeric mathematics long before it became fashionable for younger kids. She based her work on Piaget and Vygotsky. It's quite my point that she didn't base it on Montessori.


More nonsense.

Evidence contra your second claim, that Piaget's stages of cognitive development are "originally his and not to be found in Montessori's teachings": Montessori's theory of the sensitive periods [1] and the planes of development [2].

I'm not going to continue to argue with someone who willfully spreads lies (malicious "hyperbole") and poppycock.

[1] http://rmschool.org/content/sensitive-periods

[2] https://ami-global.org/montessori/quotes/four-planes-develop...


You've crossed into incivility in this thread and broken the HN guidelines by calling names. That's not ok, regardless of how wrong someone else is. Indeed, assuming your position is correct, it's important not to discredit it by commenting like this.

Would you mind reading the site guidelines and following them scrupulously when commenting here? We're trying for a better outcome than scorched earth followed by heat death, which seems to be the default for internet forums.

https://news.ycombinator.com/newsguidelines.html


Ok.

My father worked with Piaget, so I lack tolerance for someone who deliberately spreads lies about Piaget's work.


I can certainly understand that. Still, patient correction is probably a better way to honour him.


Thank you, that's a helpful perspective.


I said: "The immense impact and usefulness of Piaget's stages of cognitive development are originally his and not to be found in Montessori's teachings."

Montessori via your source on mathematics: "Mathematics: Formation of the concepts of quantity and operations (addition, subtraction, multiplication, and division) from the uses of concrete learning materials. (birth to 6)" Quite general.

Piaget has a few sub-skills concerning quantities:

  * conservation (xxx has the same amount of x'es as x x x)
  * classification (x is a letter and lowercase)
  * seriation (x X y Y could be ordered XY; xy, but also Xx, Yy)
  * transitivity, and others

These insights are very useful if you want to make, for example, a pre-numeric mathematical game concerning quantities.

Montessori does not provide such deep insights into pre-numeric mathematics.


Rubbish.

You pretend to "infer", from a one-sentence description of how the Montessori method uses concrete materials to teach the basic arithmetic operations, that somehow other mathematical principles are absent from the Montessori pedagogy.

Of course these principles are demonstrated and taught in Montessori schools, also with concrete learning materials.

You may not know much about Montessori, or you may have an axe to grind, or ...? Either way, you should stop making demonstrably false statements about her pedagogy (and about Piaget).


Well, could you reference something more scientific and recent? You just dissed Montessori's system, saying it's quite poor, and some backing for that would be great.

I would agree, of course; 100 years on, it's bound to be primitive compared to what we know now. Since you said in a previous comment that you are a member of the German Education and Science Union, I was hoping you could point us the right way.


This is called a VLQ, btw. Variable-Length Quantity: https://en.m.wikipedia.org/wiki/Variable-length_quantity
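A minimal sketch of that encoding, assuming the big-endian, MIDI-style variant the Wikipedia page describes: 7 payload bits per byte, most significant group first, with the high bit of each byte acting as a continuation flag.

```python
def vlq_encode(n):
    """Encode a non-negative int as a Variable-Length Quantity:
    7 bits per byte, most significant group first, high bit set
    on every byte except the last."""
    if n < 0:
        raise ValueError("VLQ encodes non-negative integers only")
    groups = [n & 0x7F]          # last byte: continuation bit clear
    n >>= 7
    while n:
        groups.append((n & 0x7F) | 0x80)  # continuation bit set
        n >>= 7
    return bytes(reversed(groups))

def vlq_decode(data):
    """Inverse of vlq_encode; reads exactly one quantity."""
    n = 0
    for byte in data:
        n = (n << 7) | (byte & 0x7F)
        if not byte & 0x80:      # high bit clear marks the last byte
            break
    return n

print(vlq_encode(0x7F).hex())  # '7f', still fits in one byte
print(vlq_encode(0x80).hex())  # '8100', first value needing two bytes
```

Note that some formats (protobuf's varints, for instance) use the same 7-bits-plus-flag idea but put the least significant group first, so byte order is worth checking before reusing this.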

