
The day they provide this is the day they are clearly signalling the end of macOS.


I would argue that a phone, tablet, and computer can all have the same OS, just with different UI.

So the "end of macos" is unlikely in my book.


I'm waiting for the day we can "dock" a phone to a monitor with a mouse/kb & have a full desktop OS experience.


Samsung Dex is this. I can dock my phone and get a desktop experience. I have Visual Studio Code running now (through code-server [1]) and an Ubuntu userland (via Termux/Andronix). Plus all other Android apps running in detached windows.

It's unfortunately not quite as useful as it sounds, but it is a start. I'm still at the early stages of setting it up for real work.

[1] https://github.com/cdr/code-server


How responsive is it? Would you mind being limited to your Dex without another desktop or laptop to use?


It's definitely responsive enough; performance is not the problem. It is software availability that is the biggest problem -- Android doesn't even really have a decent desktop-class text editor. But email, web browsing, games, etc. are all fine; I can multitask all these things in a Dex desktop.


I used DeX (the non-Linux version) while my main laptop was in for repairs, and it got me through. There were even positive surprises, like discovering it supported external hard drives and some of my pro audio gear. I could connect my Focusrite Scarlett USB audio interface which is connected to my JBL 305P studio speakers.

If you're able to do all your work on Android or the web, it would work. But for me there's still a ton of Windows and Mac software I need to get the job done. The other problem is DeX doesn't really work in a laptop form yet, and I like to do work from cafes a lot.


The Samsung Tab S6 with the keyboard cover would give you DeX in a laptop form factor.


Better not update it to Android 10 though, as they canceled the project.


They cancelled Linux on Dex but that's not what I'm using. Dex works fine on Android 10.


Current iPhone has the power to do so, and can connect via ApplePlay and bluetooth. So the only holdup is software.

Maybe this year's WWDC will show us something closer to that.

I'm holding out hope for this, since they "added" mouse support to iPadOS in 2019. There has to be a reason for that plus Universal apps, plus SwiftUI, right?


1. You mean AirPlay.

2. You don't need AirPlay, you could drive a monitor and keyboard/mouse directly, using an HDMI/USB dongle. But yes, as you said, software is the hold up. The rest is ready.


They might also mean CarPlay, which is closer in idea to “plug cord in get desktop UI”


Yup, we need "DesktopPlay", "TVPlay" (think nintendo switch) and "CameraPlay" so you can sync photos via AirDrop on your SLR.


Maybe they plan to support it for iPadOS devices and either no support or stripped down for iPhone. It’d be cool if we at one point had desktop power in our pockets though. (Yes I know we do to some extent, but it’s still a bit of a stretch for a daily driver.)


What is the benefit?

Most people who need a desktop OS want a desktop or notebook that performs faster than a smartphone. Otherwise they can just get an iPad or low-end laptop, and it won't set them back much more than a keyboard + mouse + display. Cloud sync takes care of the rest.


Smartphones are probably pretty close to being powerful enough for 80% of desktop use cases. Having one device that can dock to a "laptop" shell and give me a full screen desktop OS would be amazing.


current iPhones and iPads are quite powerful

https://www.macrumors.com/2018/11/01/2018-ipad-pro-benchmark...


yes! Microsoft and Android products have sort of done this in the past, but didn't do well in the market or the user/dev experience was not great.


Like Windows Phone Lumia, or many Windows tablets.


I want to see pen-on-tablet experience that allows working without a keyboard.

Then, plug tablet to monitor and “type/write” on the tablet.


Why? Writing is much slower than typing and less accurate to boot. There are also a lot of problems to solve for a variant input method which have long been settled with the keyboard. For example code completion, jumping to the definition, showing errors where the keyboard focus is, etc.

Unless someone were to come out with some kind of massive productivity gain by switching input types, people aren't going to switch and these supplemental technologies will never get built.


I often make typos and get corrections from the text editor I use when I write non-code text.

When I write code, I don’t need to type fast. However, I like to layout things quickly: proper indentation, docstrings, spacing, order/position of functions.

I think those can be solved.


Ever sketched a quick diagram in a notebook to try and work out a problem? Handwriting isn’t dead.


Writing + voice input + Soli gestures + Selfie type would go a long way.

I could live without a keyboard.


Like what Ubuntu tried to do


The problem is that the "just" is an unsolvable problem.

Your phone, and to some extent your tablet, are limited screen real estate with the finger as the primary method of interaction.

Your desktop is nearly unlimited screen real estate with high-precision input methods (mouse and keyboard).

The "just different UIs" mean "entirely different UIs tailored to specific interactions and completely changing behaviour of an app in all but the simplest cases"


All of this is true, but with processing power and disk space increasing, you could envision a world where a phone simply has a copy of a desktop OS that it could boot into for docking, like a Mac with a Windows partition.

I’m sure you could do this with Linux now, it’s just such a niche use case nobody has mass produced it.


Exactly: it's such a niche thing, no one is mass-producing it. Because you would still have a problem.

Let's say, you started editing a document on your phone. You then put it in a dock. Now what? The separate desktop OS starts, and?...


>Let's say, you started editing a document on your phone. You then put it in a dock. Now what? The separate desktop OS starts, and?...

And you edit the document in a desktop UI now.

Apple already has this feature (moving data and open files and apps and such from mobile to desktop transparently), called Continuity.


Yes. And it's different apps (one on desktop, one on mobile), and both apps have to support Continuity.

So a phone in dock would have to run two OSes in parallel, and the apps in both OSes would have to run and support continuity-like hand-off.


It could sync from the cloud.


Yup, this will work fast enough for most apps.


If MSFT can make Excel run on my iPhone, then I have a lot of hope.

When I can edit a freaking PowerPoint presentation on an iPhone (in a pinch - I wouldn't want to do it all day), then I have to think it's not only possible, but already done.

Email is probably the best example. Every major email app is on multiple platforms, and I'd be very surprised if most of the code is not already the same among platforms except for the UI.

So I'd submit that it is a solved problem.


I believe you misunderstood. The issue is not whether software can be ported to a touch platform. It is that keyboard/mouse-oriented UI and ergonomics and a touch-oriented UI and ergonomics cannot successfully co-exist. It’s not a programming challenge.


> "keyboard/mouse-oriented UI and ergonomics and a touch-oriented UI and ergonomics cannot successfully co-exist"

I'm not sure that's necessarily true. Are you aware of Mac Catalyst?

https://developer.apple.com/mac-catalyst/

(Of course, the user experience will always be somewhat different on a small phone screen compared to a big Mac screen. But that doesn't mean you can't build an app that works well on both from one code base. And certainly an iPad app is not all that different from a Mac app.)


Again, you are thinking of porting a touch app to a non-touch device when the issue is that good touch and non-touch UI and ergonomics cannot co-exist on the same device.

And I am familiar with Catalyst. I think Catalyst is a good example of how software suffers in a K/M-oriented environment when it came from a touch environment without many modifications. Even the Apple-developed apps have too many controls and views designed for tactile screens. The Catalyst development team is introducing UI elements that make more sense in macOS, like a compact calendar picker to replace the touch-style picker wheel, but it’s going to take a long time before a Catalyst app lets a developer quickly make a macOS version that feels designed for macOS, and again, there is a need for that native macOS feel because quality touch UI interfaces are slow and difficult to use on a laptop or desktop.


Well, let’s use our imagination...

... and hope that Apple reinvents user input on mobile.

A little Soli for advanced gestures:

https://atap.google.com/soli/

A selfie keyboard?

https://www.cnet.com/news/samsungs-selfietype-creates-a-magi...

Better voice commands. For example, why can’t I say “build app”

“Rename function foo to bar“


Voice certainly seems like a viable solution to some of that. In some ways it’s imminent if the accessibility improvements pushed to iOS, iPadOS and macOS are advanced just a bit further.

However, we should also expect to see PC use pushed into more advanced and specialized territory on our most powerful and versatile devices as phones and tablets assume more traditional PC work. That specialized use will advance as fast as the most attuned interface for that platform (keyboard+mouse) and the others will necessarily lag behind.


All of this won't solve the mobile/desktop dichotomy.

I rename things dozens of times a day. Saying "rename function A to B" dozens of times a day is unviable on desktop and is nearly unusable on the phone. And this is a fundamentally different UI.


>Again, you are thinking of porting a touch app to a non-touch device when the issue is that good touch and non-touch UI and ergonomics cannot co-exist on the same device.

Sure they can, just not on the same screen sizes/input methods.

You could have Excel that looks like iOS Excel when opened in the iPhone that automatically turns into Excel that looks like macOS Excel when the iPhone is connected to a larger screen with a mouse and everything.


You couldn't. Not without exceptionally complex UI code amounting to writing two different apps inside of one.


Since you have both of those apps already, you can trivially combine them.

In the process you'll also get to reuse large parts of both UI code, and almost all of the non-UI code.

And if you've started from scratch, it would be even easier to find ways to reuse more UI code -- e.g. components could come in "auto-dual" versions that adapt.


> can trivially combine them

Could you explain to me how can you trivially combine two apps?

> In the process you'll also get to reuse large parts of both UI code, and almost all of the non-UI code.

Yes to the non-UI code. Hard no to the UI code.

You will need to design a completely different set of interactions, components, layouts etc. for the mobile version compared to the desktop version.


>Could you explain to me how can you trivially combine two apps?

You already have the UI code for mobile and desktop. All you need to do is switch to one or the other when the user connects/disconnects an external monitor.

At the most basic, you could just save the spreadsheet state (staying with Excel as the example), load it in the background, and switch to show the desktop version of the app with it pre-loaded. Same way as if the user manually saved their spreadsheet, closed the mobile version of the app, and opened the same spreadsheet with the desktop version - but more transparently.

Between this and "sharing UI" there is a big spectrum. If you already have the mobile and desktop version, and the backend is more or less the same (as can be the case with apps like Excel for macOS and iOS), then compared to the work you've already done its trivial to add an intelligent way to switch from one UI to the other keeping all the other state (working spreadsheet, clipboard, current executing action, etc).

>You will need to design a completely different set of interactions, components, layouts etc. for the mobile version compared to the desktop version.

Not necessarily. A spreadsheet cell is a spreadsheet cell. Whether you click on it with touch or the mouse pointer doesn't matter. You could easily share the same underlying widget (and e.g. just show more of them). The formula editor that appears can similarly be shared. Other forms might need some extra padding, or some widgets to become larger or smaller, etc.

We already have apps that run the same in iOS and macOS, through Apple's translation layer + layout constraints and switches on widgets. The "Voice Memos" app is basically the exact same thing between iOS and Mac.
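A sketch of the shared-widget idea, with hypothetical names: the cell's logic (here a toy formula evaluator) is platform-agnostic, and only presentation parameters vary between a touch grid and a dense desktop grid.

```typescript
// Illustrative only: one cell model shared across platforms, with
// only the rendering parameters differing per platform.

interface CellStyle { width: number; height: number; fontSize: number }

const STYLES: Record<"mobile" | "desktop", CellStyle> = {
  mobile:  { width: 96, height: 44, fontSize: 16 }, // larger touch targets
  desktop: { width: 72, height: 24, fontSize: 12 }, // denser grid
};

class Cell {
  constructor(public formula: string) {}

  // Shared logic: evaluation doesn't care about the input method.
  // Toy evaluator that only handles "=a+b".
  value(): number {
    const m = this.formula.match(/^=(\d+)\+(\d+)$/);
    return m ? Number(m[1]) + Number(m[2]) : NaN;
  }

  // Per-platform presentation is just a parameter lookup.
  style(mode: "mobile" | "desktop"): CellStyle {
    return STYLES[mode];
  }
}
```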


> You already have the UI code for mobile and desktop. All you need to do is switch to one or the other

There's no "just switch". I wish people stopped hand-waving at complex technical problems with "just"s and "all you need"s.

What you're saying is: "you have two completely different UIs with completely different modes of interactions, completely different layouts, affordances, a myriad other things. 'All you have to do' is ship them together and switch them on the fly".

> then compared to the work you've already done its trivial to add an intelligent way to switch from one UI to the other

It is not "trivial"

> A sphreadsheet cell is a sphreadsheet cell. Whether you click on it with touch or the mouse pointer doesn't matter.

It does matter. Because the interactions are completely different. Just for the most trivial example: once you've selected a cell on a desktop, you can immediately start typing. On a mobile device you have to do an additional tap (double tap in Excel) or tap a different area (entry box in Google Sheets) on the screen to start typing. And that's just one interaction. There are hundreds of other interactions which will be different.
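That divergence can be sketched as two event-handling paths (a hypothetical event model, purely for illustration): the same goal, entering edit mode, needs a different event sequence on each platform.

```typescript
// Sketch of the divergent edit-entry flows described above.
// Event names are made up for illustration.

type Platform = "desktop" | "mobile";
type UiEvent = "select" | "keypress" | "doubleTap";

// Returns whether the cell ends up in editing state after the given
// event sequence on the given platform.
function isEditing(platform: Platform, events: UiEvent[]): boolean {
  let selected = false;
  let editing = false;
  for (const e of events) {
    if (e === "select") selected = true;
    else if (platform === "desktop" && e === "keypress" && selected)
      editing = true; // desktop: typing on a selected cell starts the edit
    else if (platform === "mobile" && e === "doubleTap" && selected)
      editing = true; // mobile: an extra gesture is required first
  }
  return editing;
}
```

Even in this toy version, the two branches can't be collapsed into one: the dispatch logic itself differs per platform, not just the styling.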

> We already have apps that run the same in iOS and macOS, through Apple's translation layer + layout constraints and switches on widgets.

Yes, and almost all of them fail in the most basic ways on the desktop: they provide incorrect widgets (dates for example), they break user input, they handle focus incorrectly, they don't have keyboard shortcuts, they use interaction patterns that are alien to the desktop and so on and so forth.

Let's take a look at "voice memos":

- No shortcut to delete a Voice Memo, but a slide-to-reveal Delete button. Alien to desktop

- Esc doesn't work to exit editing screen or recording screens

- Cmd+W quits the app which is against the HIG

- Once search input has focus, you can't Tab out of it (but you can Shift-Tab)

- In the editing screen the Crop Button is inside window chrome which is against HIG if I'm not mistaken.

Yes, this app runs "the same on iOS and MacOS", and that's precisely the problem: it shouldn't run "the same". It must be different because the desktop is different.

And note: this is a first-party app with next to zero functionality: a few buttons, a screen that shows one thing at a time. That's it. And it already is filled with inconsistencies and bad behaviour on the desktop. It will only be much, much worse for any app with more complex functionality (unless developers take conscious and specific steps to address this).

Also see, "Catalyst and Cohesion" [1]

[1] https://wormsandviruses.com/2019/12/catalyst-and-cohesion/


>There's no "just switch". I wish people stopped hand-waving at complex technical problems with "just"s and "all you need"s.

Well, and I wish you'd read my whole comment before the BS about hand-waving. I explicitly describe what I mean.

>What you're saying is: "you have two completely different UIs with completely different modes of interactions, completely different layouts, affordances, a myriad other things. 'All you have to do' is ship them together and switch them on the fly".

Yes. Nothing particularly special about it. You could do just that: it's technically feasible (trivial even), and it would still be an adequate experience.

>It is not "trivial"

Well, agree to disagree. I've done it for apps and it's nothing much. What would be trivial for you, just flipping a compiler flag or changing 10 lines of code? Well, you ain't gonna get that.

>It does matter. Because interactions are completely different. Just for the most trivial example: once you've selected a cell, you can immediately start typing you can immediately start typing when you're on a desktop. On a mobile device you have to do an additional tap (double tap in Excel) or tap a different area (entry box in Google Sheets) on the screen to start typing.

That's a bogus difference. If the mobile device is connected to an external BT keyboard, you can already "just start typing".

Even if that weren't the case, 99% of the widget is the same. The fact that cell focus on mobile is not enough to get the virtual keyboard to show up is a negligible difference (not to mention it will probably not even touch the cell widget code, but live in another part of the UI dispatch, or even be handled directly by the framework).

>Yes, and almost all of them fail in the most basic ways on the desktop

And they are still perfectly operable, and people (including me) use them every day. So there's that.


Please explain how email fits your mode but fails mine.

Sorry, but I’m missing something in your comment.


Basically, it’s really hard to actually develop totally separate UIs in the same app, e.g. another user brought up the Catalyst project, which is bringing poor-fit touch paradigms into macOS despite the developers’ intentions in many cases. One paradigm will be dominant. Care must also be taken to not load too many unused resources.

Interactive workflows also differ between UIs.

Since so much of an OS is the native UI and the first-party applications, even if the mobile, tablet and PC versions of an OS share some libraries, they can’t really share enough to meaningfully call them one OS without compromising the experience on all three.

So while there may be three pane email on the iPad as well as Mac, and email on iOS, they don’t really share enough to be called the same app, and if they did, at least one of them would suffer. And some of the interactions on the macOS version effectively can’t be brought over.


There’s still an unsolved problem related to desktop publishing, and that’s writing a document with citations from an EndNote or Zotero database.

I included EndNote only because there’s deep integration in Pages.


Microsoft seem to be heavily invested in React Native (and possibly Electron?) as their UI layer.


You're confusing the technology to create UI with the UI itself.


Why?

iOS is still much lighter weight than MacOS, uses less memory, doesn’t have swap, optimized for battery use, and optimized more for security than flexibility.


And yet you can edit video on it.

It’s powerful enough. The UI isn’t that big of an issue.


Yes. But what happens when you start adding swap, unlimited background processes, etc.?

All of the advantages of an iPad go away - you get a Windows 2-in-1.


Performance vs. battery life toggle could be implemented. A lot of this stuff has already been implemented in the jailbreak community over the years.


iOS has had that for years - low power mode.


Low power mode is only available on the iPhone.


Why ‘unlimited background processes’?


Modern desktop operating systems don’t limit the number of processes actually running. iOS limits the type of apps that can run in the background and will kill a process that uses too much CPU or RAM.

iOS is optimized to consume as little power and memory as possible.


>Modern desktop operating systems don’t limit the number of processes actually running.

Yes. But no reason we need "unlimited processes" to use iOS for development and other stuff.

A few processes with a hard limit would be doable...


For me personally to do any type of development, I either need a constant network connection or the ability to run my stack locally including databases and Redis.

I also need to be able to launch a web browser or Postman to debug interactively. I personally hate developing on a laptop with no external monitors (preferably two). I would definitely hate trying to do that with iOS’s simplistic multi app/multi window support.

Also, while the Files app is okay for one off documents and sharing between apps. How would that work in a development scenario?

You would also need to allow apps to communicate with each other over TCP/IP locally.

Now you’re back to a multi window GUI (making iOS more complex) and apps having random access to the file system (less secure).

If you want an iPad to behave like a laptop - why not just buy a laptop? Alternatively, if you want a laptop with the power of MacOS and the power/performance capabilities of ARM, wouldn’t it make more sense for Apple to port MacOS and create ARM laptops?

The iPad is so light, I have no trouble throwing one in my laptop bag along with my laptop and syncing files between apps on both using cloud storage.


Actually that is the trend for modern versions of Windows and macOS.


Modern versions of MacOS and Windows don’t try to get rid of swap nor do they arbitrarily kill/block background processes.


Actually they do arbitrarily kill background processes that opt in to it.

It has to be opt-in because otherwise legacy processes would break, but it is definitely present.


If you opt in for it, can it really be called “arbitrary”?


Yes - the killing is done arbitrarily without warning. Just like on iOS. It is the recommended behavior.


Better have a look at what is in the box for Windows 10X and the post-Catalina roadmap, with the increasing app sandboxing.


Well, unless you have inside knowledge about Apple’s roadmap, sandboxing is only required for the Mac App Store.

Or are you believing the same 10+ year old conspiracy theory that Apple plans to make it mandatory for apps to be installed from the Mac App Store?

Also Windows 10X is just another failed, gimped version of Windows that is supposed to make Windows run better on tablets and low-power devices.


I happen to have good guesses reading between the lines, and it's quite obvious where required notarization, user-space drivers, application entitlements and the iOSification of macOS are heading.

MSIX is what is driving Windows 10X security, which coincidently is the future of Windows package management.


Application entitlements were required shortly after the Mac App Store launched - over 10 years ago, and only for App Store apps. If Apple wants everything to be App Store only, they really are taking their sweet time.

Signed drivers have been a requirement for Windows forever. Apple is actually late to the game.

It’s also well understood that from a security and stability standpoint that moving drivers into user space was preferable.


Catalina has changed that, notarization is now required for everything, not only App Store.


Well, first there is a difference between “notarization” and “sandboxing”. Notarization just requires you to have your app signed, is a completely automated process, and in no way restricts what your app does.

Sandboxing restricts what your app can do and you have to use entitlements to use certain features.

But no, notarization is not “required” and as an end user you can ctrl-click the first time you run an app to bypass it.


Still, give it 5 more years or so.


They said the same thing back when it was announced in 2010....


They also said that Apple would never make notarization a requirement, then came Catalina.


They never said that and in fact it is still not a requirement. You can use the same control click to bypass it that you always could.


Doubtful; there's still a wide difference between macOS and iPadOS, and if anything they've diverged more in recent years. The multitasking workflow on iPadOS is in flux, and has complexity issues.



