I hear a lot that WKWebView is an improvement, but all I see are some pretty significant limitations compared to WebView:

- no screenshots

- no printing

- no user stylesheets

- no cmd-F find

- no delegate method for hover-over-link

- no access to container NSScrollView (on macOS)

- harder to customize context menu (AFAICT)

- ignores URLProtocols (macOS 10.13 added WKURLSchemeHandler, a new way to implement custom URL schemes for WKWebView, with a completely different interface from NSURLProtocol/WebView)
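For what it's worth, the 10.13 replacement mentioned in the last point looks roughly like the sketch below: a WKURLSchemeHandler registered on the configuration instead of a globally registered NSURLProtocol. The "myapp" scheme name and the hard-coded response body are my own illustrations, not anything from the thread.

```swift
import WebKit

// Minimal sketch of a WKURLSchemeHandler (macOS 10.13+ / iOS 11+)
// serving bytes for a hypothetical custom "myapp:" scheme.
final class MySchemeHandler: NSObject, WKURLSchemeHandler {
    func webView(_ webView: WKWebView, start urlSchemeTask: WKURLSchemeTask) {
        guard let url = urlSchemeTask.request.url else { return }
        // In a real handler this would come from your own data source.
        let body = Data("<html><body>Hello</body></html>".utf8)
        let response = URLResponse(url: url,
                                   mimeType: "text/html",
                                   expectedContentLength: body.count,
                                   textEncodingName: "utf-8")
        urlSchemeTask.didReceive(response)
        urlSchemeTask.didReceive(body)
        urlSchemeTask.didFinish()
    }

    func webView(_ webView: WKWebView, stop urlSchemeTask: WKURLSchemeTask) {
        // Cancel any in-flight work for this task.
    }
}

// Registration must happen on the configuration, before the web view exists:
// let config = WKWebViewConfiguration()
// config.setURLSchemeHandler(MySchemeHandler(), forURLScheme: "myapp")
// let webView = WKWebView(frame: .zero, configuration: config)
```

Unlike NSURLProtocol, this only intercepts schemes you register per web view, so it can't shadow http/https.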

I'm dreading the day when Apple is going to require me to 'upgrade' my WebView to WKWebView.

All I want to do is supply a byte stream containing HTML and have it displayed in an NSView, with standard features. This doesn't seem like an obscure case or a difficult problem. I don't understand why none of Apple's web views ever had a simple load(stream: NSInputStream) method. This was hard with WebView, and WKWebView made it a lot harder.
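The closest thing WKWebView offers, as far as I know, is load(_:mimeType:characterEncodingName:baseURL:), which takes a Data rather than a stream, so you end up buffering the whole thing yourself. A sketch of that workaround (the helper function name and buffer size are my own):

```swift
import Foundation
import WebKit

// Hypothetical helper: drain an InputStream into Data, then hand the
// buffered bytes to WKWebView. Only the WebKit call at the end is the
// platform API; everything else is the caller's own plumbing.
func loadHTML(from stream: InputStream, into webView: WKWebView) {
    var data = Data()
    var buffer = [UInt8](repeating: 0, count: 64 * 1024)
    stream.open()
    defer { stream.close() }
    while stream.hasBytesAvailable {
        let read = stream.read(&buffer, maxLength: buffer.count)
        guard read > 0 else { break }
        data.append(buffer, count: read)
    }
    webView.load(data,
                 mimeType: "text/html",
                 characterEncodingName: "UTF-8",
                 baseURL: URL(string: "about:blank")!)
}
```

Note this defeats any streaming/progressive rendering: nothing is displayed until the stream is fully drained.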



Yes, it's a mixed bag with some things better and others worse. But having a great JavaScript engine with the JIT enabled is pretty important for many applications.

But breaking the browser up into separate processes that communicate via messages and share textures in GPU memory (IOSurface, GL_TEXTURE_EXTERNAL_OES, etc.) is the inexorable direction of progress. It's what all the browsers are doing now, and it's why, for example, Firefox had to make so many old single-process XPCOM/XULRunner plug-ins obsolete.

IOSurface:

https://developer.apple.com/documentation/iosurface?language...

https://shapeof.com/archives/2017/12/moving_to_metal_episode...

GL_TEXTURE_EXTERNAL_OES:

https://developer.android.com/reference/android/graphics/Sur...

http://www.felixjones.co.uk/neo%20website/Android_View/


So I get a better JavaScript engine, at the cost of making 10 features I need worse. I don't even want a JavaScript engine. I just want to display some HTML.

I'm sure for some use cases it's "progress", but I'm really having trouble seeing it right now. It would be an improvement if I were writing a web browser, but I don't understand why it had to make the simple case worse.


Chrome and Firefox with WebRender are going the opposite direction and just putting all their rendering in the chrome process/"GPU process" to begin with.


Yes, I know; that's exactly what I meant by "breaking up the browser into different processes". They used to all be in the same process. Now they're in different processes, communicating via messages and shared GPU memory using platform-specific APIs like IOSurface. So it's no longer possible to write an XPCOM plugin for the browser in C++ and call it directly from the renderer, because the renderer is running in a different process; you have to send messages and use shared memory instead. But then if the renderer crashes, the entire browser doesn't crash.



