I think it is limited to integral quadratic bezier curves, which is sufficient for text rendering. But general purpose vector graphics almost certainly want rational cubic bezier curves too.
There are two ways to get winding numbers and then decide on filled or empty by some rule like non-zero or even-odd:
a) The winding number of a point is the signed number of intersections between a ray cast from that point (e.g. along a scanline) and the closed path.
b) The winding number around a point is the total angle subtended by the path at that point, divided by 2*pi.
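As a toy illustration of the two definitions (a Python sketch for a polygonal path; the function names are mine, not from any renderer), both approaches agree on the same winding number:

```python
import math

def winding_crossings(point, polygon):
    """Approach a): signed count of crossings of a horizontal ray cast
    from `point` to the right (a one-pixel 'scanline')."""
    px, py = point
    w = 0
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        if (y0 <= py) != (y1 <= py):        # edge crosses the ray's line
            # x-coordinate where the edge meets the scanline
            x = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if x > px:                      # intersection is to the right
                w += 1 if y1 > y0 else -1   # sign from edge direction
    return w

def winding_angle(point, polygon):
    """Approach b): total angle the path subtends at `point` over 2*pi."""
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        a0 = math.atan2(y0 - py, x0 - px)
        a1 = math.atan2(y1 - py, x1 - px)
        d = a1 - a0
        # wrap into (-pi, pi] so each segment contributes its signed turn
        while d > math.pi: d -= 2 * math.pi
        while d <= -math.pi: d += 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # counter-clockwise square
inside, outside = (1, 1), (3, 1)
```

Both give winding number 1 for the inside point and 0 for the outside point; the trade-offs between them only show up once you handle curves, tangencies and finite precision.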
Slug uses approach a), and that comes with a lot of edge cases (see the chart in the post) and numerical precision issues. The approach by Loop & Blinn uses b) and is thus simpler and more robust. Likewise, the patent on that one has expired too: https://news.ycombinator.com/item?id=47416736#47420450
Loop and Blinn does not compute a winding number using the b) method. It avoids the issue of a winding number by assuming there is only one bezier curve per triangle, which requires a complicated triangulation step. That can produce some nasty geometry in more complex cases. With Slug, you can use only one quad per glyph if you want.
Also just to clarify regarding this statement:
> Slug uses approach a) and that comes with a lot of edge cases (see chart in the post) and numerical precision issues
Slug does not have numerical precision issues. It's the breakdown into different cases that _solves_ those issues, whereas your statement makes it sound like slug has _both_ the case complexity and the precision issues.
> It avoids the issue of a winding number by assuming there's only 1 bezier curve per triangle
The original paper did assume no overlap, yes. But that is not how anybody would implement it. For a long time one would use the stencil buffer with different operations depending on front-face / back-face (this is where the path's rotation around the sample comes in, and what makes this an angle-based approach).
> which requires a complicated triangulation step. It can produce some nasty geometry in more complex cases.
Again, not how anybody would implement this. You can just stream the quadratic bezier curves unprocessed into the vertex shader, literally the simplest thing conceivable.
> With Slug, you can use only 1 quad per glyph if you want.
Nowadays one would probably implement Loop & Blinn in a tiled compute shader too (instead of using stencil buffers) to reduce memory bandwidth and overdraw. That way you also get one quad per glyph, but without any of the geometry special-casing that Slug does.
> It's the breakdown into different cases that _solves_ those issues, whereas your statement makes it sound like slug has _both_ the case complexity and the precision issues.
Correct, I might have worded that badly. It still remains a trade-off in a) which b) does not have.
[1] and [2] sound similar to what you are describing. They still involve triangulating the shape, but the triangulation process seems much simpler than the loop and blinn paper. However, if you want to do distance based anti-aliasing rather than supersampling, things are going to get complicated again as you have to expand the shape outline to capture more pixel centers.
I don't see a straightforward way to apply this technique in a pixel shader that includes multiple curves per triangle. I feel like any attempt to do that will approach the complexity of Slug, but maybe it's my own shortcoming that I don't see it. I would love to read more detailed information on that if you have it.
> [1] and [2] sound similar to what you are describing. They still involve triangulating the shape, but the triangulation process seems much simpler
Yes, they describe one variation of the angle-based method for winding numbers by spanning a triangle fan from an arbitrarily chosen pivot point / vertex.
> if you want to do distance based anti-aliasing rather than supersampling
Particularly when it comes to rendering vector graphics, I think of analytic anti-aliasing methods as somewhat cursed and prefer multisampling [0], at least for magnification. For minification, mip-mapping remains the go-to solution. However, if you only render 2D text on a 2D plane, which is typically overlap-free, then these correctness issues don't matter.
> I don't see a straightforward way to apply this technique in a pixel shader that includes multiple curves per triangle
All modern vector renderers I know of avoid triangle rasterization entirely. Like I said, they typically do tiles (screen space partitioned into quads) in a compute shader instead of using the fixed-function pipeline with a fragment / pixel shader. The reason is that nowadays compute is cheap and memory bandwidth is the bottleneck. Thus, it makes sense to load a bunch of overlapping geometry from global memory into workgroup shared memory, render all of it down to pixels in workgroup shared memory, and then only write those pixels back to the framebuffer in global memory.
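A sketch of just the binning step of such a tiled renderer (illustrative Python, not any particular library's code): curves are assigned to screen-space tiles by a conservative bounding box, so each workgroup only has to load its own bin from global memory:

```python
# Binning quadratic bezier curves into screen-space tiles. All names are
# illustrative; a real renderer would do this on the GPU.
TILE = 16  # tile size in pixels

def curve_bbox(p0, p1, p2):
    """Conservative bounding box of a quadratic bezier: the curve lies
    inside the convex hull of its three control points."""
    xs = (p0[0], p1[0], p2[0])
    ys = (p0[1], p1[1], p2[1])
    return min(xs), min(ys), max(xs), max(ys)

def bin_curves(curves, width, height):
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for i, (p0, p1, p2) in enumerate(curves):
        x0, y0, x1, y1 = curve_bbox(p0, p1, p2)
        # Append the curve index to every tile its bbox touches
        for ty in range(int(y0) // TILE, min(int(y1) // TILE, tiles_y - 1) + 1):
            for tx in range(int(x0) // TILE, min(int(x1) // TILE, tiles_x - 1) + 1):
                bins[(tx, ty)].append(i)
    return bins

curves = [((2, 2), (10, 30), (18, 2)),     # spans tiles (0..1, 0..1)
          ((40, 40), (50, 8), (60, 40))]   # spans tiles (2..3, 0..2)
bins = bin_curves(curves, 64, 64)
```

Each workgroup would then rasterize only `bins[(tx, ty)]` into shared memory and write the finished tile back once.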
> I feel like any attempt to do that will approach the complexity of Slug
A highly optimized implementation might very well, yes. Yet, handling the many cases of intersections of the path and the scanline won't be contributing to the complexity, which is what started this discussion.
> I would love to read more detailed information on that if you have it.
I implemented the outdated stencil buffer + triangle fan + implicit curves approach [1] if you want to take a look under the hood. The library is quite complex because it also handles the notoriously hard rational cubic bezier curves analytically, which Slug does not even attempt and just approximates. But the integral quadratic bezier curves are very simple and that is what is comparable to the scope Slug covers. It is just a few lines of code for the vertex shader [2], the fragment shader [3] and the vertex buffer setup [4].
Edit: You can even spin Loop & Blinn into a scanline method / hybrid: they give you the side of the curve your pixel is on [5], which is typically also the thing scanline methods are interested in. They compute the exact intersection location relative to the pixel, only to throw away most of the information and keep only the sign (the side the pixel is on). So that might be the easiest fragment-shader vector renderer possible. I put it together in a Shadertoy [6] a while back.
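For reference, the Loop & Blinn side test for an integral quadratic bezier boils down to interpolating the canonical coordinates (0,0), (1/2,0), (1,1) across the control triangle and checking the sign of u^2 - v. A CPU-side Python sketch of that test (function names are mine; the GPU does the interpolation for free):

```python
# In the canonical space the curve is v = u^2, with (u, v) assigned as
# (0,0), (1/2,0), (1,1) at the control points b0, b1, b2. The sign of
# f = u^2 - v says which side of the curve a point is on.

def barycentric(p, a, b, c):
    """Barycentric coordinates of p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)
    l1 = ((px - ax) * (cy - ay) - (cx - ax) * (py - ay)) / det
    l2 = ((bx - ax) * (py - ay) - (px - ax) * (by - ay)) / det
    return 1.0 - l1 - l2, l1, l2

def curve_side(p, b0, b1, b2):
    """< 0 on one side of the curve, > 0 on the other, ~0 on the curve."""
    l0, l1, l2 = barycentric(p, b0, b1, b2)
    u = 0.0 * l0 + 0.5 * l1 + 1.0 * l2   # interpolate canonical (u, v)
    v = 0.0 * l0 + 0.0 * l1 + 1.0 * l2
    return u * u - v

b0, b1, b2 = (0.0, 0.0), (1.0, 2.0), (2.0, 0.0)
# bezier(0.5) = 0.25*b0 + 0.5*b1 + 0.25*b2 = (1, 1), i.e. on the curve
on_curve = curve_side((1.0, 1.0), b0, b1, b2)   # ~0
below = curve_side((1.0, 0.5), b0, b1, b2)      # negative
toward_b1 = curve_side((1.0, 1.5), b0, b1, b2)  # positive
```

In a fragment shader the same `u, v` arrive as interpolated varyings and the test is a single multiply-subtract plus a sign check.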
Hey, huge thanks for linking that shadertoy example! It made it click for me how you can apply loop and blinn without triangulating.
I'm going to dig into it further, but if I understood at a glance, the triangles are there conceptually, but not as triangles the graphics API sees. You compute your own barycentric coordinates in the pixel shader, which means you can loop over multiple triangles/curves within a single invocation of the shader. Sorry if that should've been obvious, but it's the piece I was missing earlier.
I can now concede most of your original point. This seems like a simpler approach than Slug, if you're willing to supersample. Distance-based anti-aliasing remains an advantage of Slug in my view. I understand the limitations of analytic anti-aliasing approaches when compared to supersampling, but it can be a wonderful tradeoff in many situations. If you can't afford many supersamples and the artifacts are rare, it's an easy choice.
But for me personally, I'm writing a 4x supersampled 3D software renderer. I like how the supersampling is simple code that kills two birds with one stone: it anti-aliases triangle edges and the textures mapped within those triangles. I want to add support for vector-graphic textures, so your approach from the shadertoy could fit in very nicely with my current project.
But just one final thought on Slug in case anyone actually makes it this deep in the thread: the paper illustrates 27 cases, but many of those are just illustrating how the edge cases can be handled identically to other cases. The implementation only needs to handle 8 cases, and the code can be simple and branchless because you just use an 8-entry lookup table provided in the paper. You only have to think about all those cases if you're interested in why the lookup table works. It's not as intimidating as it looks. Well, I haven't implemented it, but that's my understanding.
> All modern vector renderers I know of avoid triangle rasterization entirely.
Well, now you know of a modern renderer that does use triangle rasterization. The reason is simple -- Slug was designed to render text and vector graphics inside a 3D scene. It needs to be able to render with different states for things like blending mode and depth function without having to switch shaders. It also needs to be able to take advantage of hardware optimizations like hierarchical Z culling. And sometimes, you need to clip glyphs against some surface that the text has been applied to. Using the conventional vertex/pixel pipeline makes implementation easier because it works like most other objects in the scene. Having this overall design is one of many reasons why a huge swath of the games industry has licensed Slug.
You don't seem to have grokked the main feature that makes Slug interesting. The algorithm handles every single possible case uniformly through the use of a very fast classification and root eligibility determination technique. On some GPUs (including all NV from the last 10+ years), handling the full set of cases shown on the poster reduces to a single instruction (LOP3). The algorithm also eliminates all numerical precision issues -- provably -- making it the most robust of all time. There are no valid inputs for which the algorithm fails, so to say Loop-Blinn is somehow more robust is incorrect.
> I think that DoS needs to stop being considered a vulnerability
Strongly disagree. While it might not matter much in some / even many domains, it absolutely can be mission critical. Examples are: Guidance and control systems in vehicles and airplanes, industrial processes which need to run uninterrupted, critical infrastructure and medicine / health care.
These ReDoS vulnerabilities always come down to "requires a user input of unbounded length to be passed to a vulnerable regex in JavaScript". If someone is building a hard real-time airplane guidance system, they are already not doing this.
I can produce a web server that prints hello world, and if you send it enough traffic it will crash. I can put user input into a regex and the response time might go up by 1ms, and no one will say it's suddenly a valid CVE.
Then someone will demonstrate that with a 1MB input string it takes 4ms to respond and claim they've earned a CVE for it. I disagree. If you simply use webpack you've probably seen a dozen of these where the vulnerable input was inside the webpack config file. The whole category should go in the bin.
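For anyone unfamiliar, the classic shape of these ReDoS findings is a nested quantifier that backtracks exponentially on a near-match. A minimal demo (Python here just for illustration; the reported issues are usually in JavaScript):

```python
import re
import time

# Classic catastrophic-backtracking pattern: the nested quantifiers force
# the engine to try exponentially many ways to split the 'a's before it
# can conclude the match fails.
evil = re.compile(r"^(a+)+b$")

def timed_match(n):
    s = "a" * n + "c"          # almost matches, so backtracking kicks in
    t0 = time.perf_counter()
    m = evil.match(s)
    return m, time.perf_counter() - t0

m_small, t_small = timed_match(5)   # fails near-instantly
m_big, t_big = timed_match(20)     # same result, vastly more work
```

Both calls return no match; only the attacker-controlled input length decides whether that takes microseconds or (for slightly longer inputs) minutes. Whether that constitutes a "vulnerability" is exactly what's being argued about here.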
These are functional safety problems, not security vulnerabilities.
For a product that requires functional safety, CVEs are almost entirely a marketing tool and irrelevant to the technology. Go ahead and classify them as CVEs, it means the sales people can schmooze with their customer purchasing department folks more but it's not going to affect making your airplane fly or you car drive or your cancer treatment treat any more safely.
I think this is just sort of the wrong framing. Yes, a plane having a DoS is a critical failure. But it's critical at the level where you're considering broader scopes than just the impact of a local bug. I don't think this framing makes any sense for the CVE system. If you're building a plane, who cares about DoS being a CVE? You're way past CVEs. When you're in "DoS is a security/ major boundary" then you're already at the point where CVSS etc are totally irrelevant.
CVEs are helpful for describing the local property of a vulnerability. DoS just isn't interesting in that regard because it's only a security property if you have a very specific threat model, and that threat model isn't localized (because it's your threat model). That's totally different from RCE, which is virtually always a security property regardless of threat model (unless your system is, say, "aws lambda", where that's the whole point). It's just a total reversal.
If availability is a security concern, then yes, DoS is a security concern, but only insofar as all other bugs that limit availability are too. It is not a security concern per se, regardless of whether availability is a security concern. We don't treat every bug as a security issue.
The Linux kernel does the opposite; they do not believe in security vulnerabilities. That's why if you mention "security" in a patch, Linus will reject it.
I just hate being flagged for rubbish in Vanta that is going to cause us the most minor possible issue with our clients because there’s a slight risk they might not be able to access the site for a couple of hours.
> The thing is that I myself don't even know what I want to do with it.
Embrace the next challenge: Instead of roads on parabolic (Euclidean) geometry, have roads on elliptic (non-Euclidean) geometry, like the surface of a sphere. Plus, on a sphere every line is already a circular arc anyway (no matter if straight or bent, the difference is just the center, radius and normals). Thus, this system of circular arc segments really lends itself to such a space.
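For the curious, moving along such a road segment is just a slerp between unit vectors on the sphere (a generic sketch, not taken from the repos linked below):

```python
import math

def slerp(p, q, t):
    """Interpolate along the great-circle arc between unit vectors p and q;
    on a sphere every 'straight' road segment is such an arc."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    omega = math.acos(dot)                       # arc angle between p and q
    if omega < 1e-9:                             # degenerate: p == q
        return p
    s0 = math.sin((1.0 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return tuple(s0 * a + s1 * b for a, b in zip(p, q))

# Quarter arc from the x-axis to the y-axis; the midpoint sits at 45 degrees
mid = slerp((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
```

Bent roads work the same way, just with a rotation center other than the sphere's own.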
Little prince style micro planets with their own miniature infrastructure will always have a special place in my heart. Half a year ago I started with laying out the basics: https://github.com/Lichtso/bevy_ellipsoid_billboard and https://github.com/Lichtso/bevy_geodesic_grid but got distracted by fixing some engine bugs in Bevy along the way. That reminds me I have to update to the newest engine version ...
Anyway, you can find some of the roads-on-spheres stuff here: https://github.com/Lichtso/bevy_geodesic_grid/blob/main/src/... It can not only generate the extrusion mesh but also calculate how the mesh overlaps with a geodesic grid of triangular tiles on the surface.
Go full science fiction and enable vertical or even upside-down roads for a 3D experience. :-)
Imagine an environment where ground/walls/ceilings always have gravity and one can build literal city mazes in horizontal and vertical directions. All that traffic going everywhere, oh my..
Really depends: In some areas it is quite advanced (rendering) and in others it is lacking / underdeveloped (editors / tooling). But there is an incredible amount of progress and also churn in keeping up with that.
Another case in point: life only exists in liquids, not in solids (too much structure) and not in gases (too much chaos).
In fact one could argue that this is a definition of an interesting system: It has to strike a balance between being completely ordered (which is boring) and being completely random (which is also boring).
Thanks! Simon's example uses the custom voice model (creating a voice from instructions). But that comment led me eventually to this page, which shows how to use mlx-audio for custom voices:
> but [analytic anti-aliasing (aaa)] also has much better quality than what can be practically achieved with supersampling
What this statement is missing is that aaa coverage is resolved immediately, while msaa coverage is resolved later in a separate step, with extra data being buffered in between. This matters because msaa is unbiased while aaa is biased towards too much coverage once two paths partially cover the same pixel. In other words, aaa becomes incorrect once you draw overlapping or self-intersecting paths.
Think about drawing the same path over and over at the same place: aaa will become darker with every iteration, msaa is idempotent and will not change further after the first iteration.
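The darkening is plain compositing arithmetic. A one-pixel toy model (not any renderer's actual code):

```python
# One pixel that a path covers 50%. 'aaa' composites the analytically
# computed coverage immediately (source-over), so drawing the same path
# twice darkens the pixel; 4x 'msaa' keeps per-sample state and resolves
# later, so an identical second draw changes nothing.

def aaa_draw(pixel, coverage, color=1.0):
    # source-over: out = src * alpha + dst * (1 - alpha)
    return color * coverage + pixel * (1.0 - coverage)

def msaa_draw(samples, covered_mask, color=1.0):
    # each covered sample is simply overwritten with the path's color
    return [color if hit else s for s, hit in zip(samples, covered_mask)]

# aaa: coverage 0.5 applied twice
p = aaa_draw(0.0, 0.5)      # 0.5
p = aaa_draw(p, 0.5)        # 0.75, darker than the true 0.5
# msaa: the same 2 of 4 samples are hit both times
s = msaa_draw([0.0] * 4, [True, True, False, False])
s = msaa_draw(s, [True, True, False, False])
resolved = sum(s) / len(s)  # 0.5, idempotent
```

The second aaa draw lands at 0.75 coverage while the msaa resolve stays at the correct 0.5, which is exactly the bias described above.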
Unfortunately, this is a little-known fact even in the exquisite circles of 2D vector graphics people, who often present aaa as a silver bullet, which it is not.
> I honestly can't think of any good examples where game mechanics and stories interacted in a way that gave you significant agency while still being fun. I'd love to be given counter-examples though.
Rimworld and The Sims. Both are procedural story writers.
> I felt railroaded into comically absurd black/white choices
I agree: all these AAA titles are essentially movies where you get tons of "agency" in choices which are irrelevant to the story, while the main plot is hard-scripted into a few predetermined paths.
Until we have full generative AI as a game engine, the only alternative remains the procedural approach mentioned at the beginning.