When I was in high school, I was part of an online community that wrote graphical shells for DOS (really NTVDM) in QBasic. One of the things we all started to want was a web browser, but most of us had no idea how to write a parser, let alone deal with the complexities of HTML and JavaScript.
So we created our own markup language, where each "tag" was on its own line, with the first character indicating the type of tag. Only HTTP GET requests were supported, all output was in color CP437 text, and there was really limited layout support.
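Purely as an illustration (the real tag letters are long lost to me, so these are invented), a page might have looked something like:

    Tmy homepage
    Hwelcome
    Pthis is a paragraph of plain text
    Lpage2.txt a link to another page

with the first character of each line selecting the tag type.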
After a while there were maybe 5 or 6 different clients and maybe 10 different people had created websites, including a forum, a chat app, an RSS reader, a search engine, and some pages advertising various software people created. Sometimes it can be really fun to start "from scratch" with something simple.
Pretty much all the apps were written in PHP on shared hosting. Since the system had support for a simple kind of forms (using GET requests), it was possible to accept queries for the various web apps. The search engine crawled the pages that existed looking for links from a few "start" pages, and built a primitive index.
Since none of us knew how to write a parser, dealing with the arbitrarily nested nature of XHTML was pretty intimidating (now that I know how to do it, it's not too bad, but in 9th grade I made very little progress). To my knowledge there is no XML parsing library for QBasic; it fell out of favor for "serious work" about 10 years before XML was invented.
Can't speak for the original author but if I had to guess: Files on a world-writable shared network volume, as this was a "classic" in school settings ;-)
No matter the decade, give some pupils a shared network volume somewhere and sooner or later they'll write a chat app in whatever programming language they can get their fingers on in that network. At least that was my experience: I did it, and heard from other schools where the same happened.
For that we cheated - someone wrote a command-line Windows program called GetWeb (I think) that took in a URL, fetched it, and wrote the result to a particular filename. I believe most of the clients used that, but some (especially those written in FreeBASIC) may have done their own thing.
This is interesting as an example of something that can be done on NTVDM but generally not in other DOS emulation solutions such as DOSBox. A DOS program running in NTVDM can have a Windows console application as a subprocess, which DOSBox and similar emulators generally don't support. So a QBASIC program running under NTVDM can use the SHELL statement to run a Windows console program.
Part of how this works is that NTVDM uses the Windows console to host DOS applications. By contrast, DOSBox shows them in a Windows GUI window, not a console window. In principle, DOSBox could display them in a console window too; that works fine for text-mode applications, but it can't support graphics.

But, actually, NTVDM can run graphical applications in a Windows console window – how? Well, the legacy Windows console subsystem actually supports two different kinds of console buffers – text and graphics – but only the former is documented. NTVDM creates a graphics buffer for graphical DOS applications using an undocumented CONSOLE_GRAPHICS_BUFFER flag to the CreateConsoleScreenBuffer API – http://blog.airesoft.co.uk/2012/10/things-ms-can-do-that-the... – regrettably, creating a console graphics buffer is broken on 64-bit Windows. Another method that would still work on 64-bit Windows is GetConsoleWindow to get the HWND of the console window and draw on it manually.

But Microsoft is discouraging console APIs such as CreateConsoleScreenBuffer and GetConsoleWindow since they don't work with pseudoconsoles (the Windows 10 equivalent of Unix ptys) – and it actually looks like in more recent Windows code the CONSOLE_GRAPHICS_BUFFER flag is gone – see
https://github.com/microsoft/terminal/blob/fb597ed304ec6eef2... and
https://github.com/microsoft/terminal/issues/246
NTVDM has an interface called VDD which you can use to write virtual device drivers. Basically, a VDD is a Windows DLL loaded into NTVDM.EXE, and DOS programs can call into the DLL by executing illegal instructions. Often a VDD would have an associated TSR which would provide a more normal interface to DOS programs (based on software interrupts rather than illegal instructions).
Microsoft provides some, and third-party vendors and open source provide others (the interface is documented in the Windows DDK, or at least it used to be). In particular, Microsoft had one called REDIR which included a bridge between the DOS and Windows NETBIOS APIs. Microsoft used to ship their own Netware client with old versions of Windows and that included another, VWIPXSPX, which similarly bridges the IPX/SPX API from DOS to Windows. Novell’s Netware client for Windows had an equivalent VDD called IPXVDD - https://support.novell.com/techcenter/articles/ana19970501.h...
And see http://netfoss.com/ for a VDD which connects the DOS FOSSIL API (de facto standard for DOS BBS to talk to multiport serial cards) to TCP/IP.
QBASIC doesn’t support any of these APIs, but it can call code written in assembler, and assembler can.
Another option would be to use a virtual serial port driver which makes a TCP/IP port appear as a serial port. QBASIC has built-in support for talking to serial ports. There are a bunch of these, for example https://pcmicro.com/netserial/ or
http://com0com.sourceforge.net/ - it can be done either as a Windows kernel driver (so all Windows apps can access it) or as a VDD (only usable by DOS programs running under NTVDM but a pure user mode solution). Actually, if one uses DOSBox instead of NTVDM it has a serial-to-telnet gateway built in.
So there are lots of ways they could have solved it. Still wondering what they actually did.
That would indeed have been pretty cool and I remember reading up on ideas like that. What we actually did (as I mentioned in a recent reply) was use a helper program written by someone who knew a language other than QBasic that fetched files given a URL on the command line. I don't think there was a client that ran in actual DOS.
I understand the desire to use trust-on-first-use to get rid of certificate authorities (some of the most despicable and untrustworthy organizations around, and the largest security hole in TLS). However, trust-on-first-use basically hands any long-term man-in-the-middle a complete victory. Authoritarian regimes are going to require ISPs to mitm every connection from first use, and hand over whatever data is desired to the secret police.
CAA, key pinning, and certificate transparency all bring the risk of CA abuse way down without opening a huge new vector of ISP abuse.
Personally I've come to believe none of it matters and that encryption is a total waste of time, at least for the kind of stuff I would want to publish using Gemini (long-form text) where there's nothing interactive or user-specific in any request. Just the act of making a network connection gives away who you are, where you are, when you are online, and what you're reading: https://kieranhealy.org/blog/archives/2013/06/09/using-metad...
Restricting Gemini to just long-form text (one way consumption) is really reducing the possibilities Gemini can have, IMO. Though the whole minimalist attitude would probably preclude most of them.
Fair. I certainly think that type of simplistic Gemini has a place in the world.
But I think ">If you want more possibilities, use the web IMO" goes against the minimalist ethos of Gemini as well; I don't want something as bloated as the web, but perhaps I do want a comment section on my posts? I think that's not too bloated to be thrown in with the rest of the web. But it might very well be too bloated to throw into Gemini, which is reasonable.
Footnote: An email alongside the post + the author editing the post with meaningful and thoughtful contributions could perhaps be a substitute.
With Gemini, the philosophy of intentional austerity means that everyone is going to be a bit unhappy, and want some feature added to it. Its "incompleteness" is a necessary consequence of its design philosophy -- it calls on us to accept it "as-is" rather than try and extend the spec. A mailing list for comments, as you describe, would be more "gemini philosophy". This is what the author does:
I find it laudable you're openly admitting making users unhappy.
Of course, what's "added" and what's "simple" is a political value choice in and of itself.
Using a mailing list for comments sounds like "returning to the golden days when we didn't need to take non-techie people seriously but could tell them what to use their cognitive capacity for", i.e. to learn arcane UIs that never even tried to take into account what human-computer interaction research has learned about how learning and cognition work.
It really doesn't sound like there's been an actual critical discussion taking into account the needs of a substantial array of potential users. Instead the design seems like a reactive wish to return to olden days.
Which is okay of course, it's just that how it's presented seems skewed in a very particular way.
I really don't think it's okay. I've noticed this type of "tech reactionarism" a lot recently in open source as a form of wishful thinking. This author in particular seems to write about it quite a lot. But it's misleading and not being fair to the users, who deserve to know that what they're doing is a waste of time. Unless someone wants to pull out the old teletype and modem connected to a phone line then we can't go back to the olden days. And honestly, nobody wants to do that in 2021.
I fundamentally disagree here. There are ways that people waste their time on the web that are far more pernicious than "learning how to write gemtext markup".
The web is in a seriously messed up state, and in many ways, things are getting worse. I think it's far more dangerous to shrug our shoulders and just let things continue as they are than to work on projects that attempt to present an alternative.
I don't see how you're disagreeing, you seem to acknowledge that it is a waste of time. And I don't see how any one of these is more pernicious than another, there can be bad actors, cyberstalkers, scammers, criminals everywhere on the net including on gemini.
Edit: A point I'd like to make here is that Gemini is not an alternative web and never will be, according to the designer's own statements. I share your frustration with some of the modern aspects of the web but the web also does a lot of things right that we may take for granted, let's not be so cynical as to forget that. I'm guilty of it myself in the past but no longer, and part of it was because people misled me.
Yeah, depends on the direction from which you look at "okay".
Arguably there are worse things in society than techies making tech for themselves.
Ironically though, to me it seems that Gemini and Facebook, though culturally presented as opposites here, are really fruits of the same tree, just different gardens and different stages of growth.
I think that this restriction is good, though. For other uses, there are other file formats: the Gemini protocol supports other file formats too, e.g. you can use audio/ogg or audio/opus for audio, image/png or image/xpixmap for pictures, etc. A client might not support them and a user might not want them, so really you should use text if that is suitable (I am one who would rather have an explanation in text than a video), although you can still have links to other files if needed. There are also other protocols: e.g. NNTP for slow communication (including if you want to allow comments on your articles), IRC for fast communication, Telnet/SSH for interactive applications; and another protocol could be made up for database access, which you could then reach with a SQLite extension. The text/gemini format can include links to other protocols too.
What I think is that an "insecure-gemini:" scheme should be added, which is the same except without TLS, and 6x responses are not allowed (if a client certificate is required, it must redirect to the protocol with TLS).
People, or businesses, that you know in real life could exchange certificate hashes in person.
This is one of those things that usually garners the response "normal people would never do that", but honestly I'm surprised that no-one has even tried.
Let's say that web browsers put a short hash of the certificate right in the URL bar. Amazon, for example, could print its hash on every shipping box. Banks could print their hashes on plaques in every branch. Newspapers on their, well, newspapers. Media organizations could occasionally add them to their TV logos and radio jingles. And so on. There's any number of out-of-band channels available.
For most sites, you actually wouldn't need to verify the hash anyway. You usually wouldn't care. But when you did care, I honestly think this isn't such a crazy idea.
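Computing such a fingerprint is the easy part; a rough Python sketch (the hostname is a placeholder, and a real scheme would have to standardize the hash input and a human-friendly encoding):

    # Fetch a server's certificate and print a short fingerprint of it --
    # the kind of hash you could imagine printing on a box or a plaque.
    import hashlib
    import ssl

    pem = ssl.get_server_certificate(("example.com", 443))
    der = ssl.PEM_cert_to_DER_cert(pem)   # hash the canonical DER bytes
    print(hashlib.sha256(der).hexdigest()[:16])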
Many years ago, I was put into a situation where I had to start using online banking. Being skeptical about security, I asked for something along the lines of a certificate hash that could be verified in person. They couldn't answer that. So I tried asking how I could verify that the certificate displayed by the web browser was correct. They couldn't answer that. In the end, I ended up trusting the padlock icon and being left with the impression that commercial security was mostly about the illusion of security. To this day I'm left with the impression that some sort of MITM attack would be possible through the creative abuse of certificate issuers and proxies since there is no direct means of verifying the certificate is authentic. And they won't take that final step since it shatters the illusion of security being simple.
> mostly about the illusion of security. To this day I'm left with the impression that some sort of MITM attack would be possible through the creative abuse of certificate issuers and proxies since there is no direct means of verifying the certificate is authentic
Why would a bank's customer support agent know about encryption? Usually IT is a siloed function in most banks.
In a sane world, the bank would have a flier for the teller to give to the GP with several ways to verify the bank's key.
We only live without this because the bank can just reverse transactions when there is a problem, and the police will fall pretty heavily on anybody who exploits the weakness. And also because there are plenty of easier-to-exploit ones.
I think it was more that a random bank teller could not be expected to explain the intricacies of certificate authorities and online encryption. The bank likely has a security whitepaper on their website which explains all this.
Nowadays, all certificates have to be submitted to Certificate Transparency Logs and must have attached proof to be considered valid. Also, there are CAA records to ensure that only specific CAs are able to issue certificates.
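For reference, a CAA record is a single DNS entry; a zone-file line like this one (domain and CA are placeholders) tells all other CAs to refuse issuance:

    example.com.  IN  CAA  0 issue "letsencrypt.org"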
My first idea is that it would be a nightmare from a supply chain security perspective. Eventually someone from the Amazon box printing company will be bribed to print a different certificate hash on 0.5% of the printed boxes, for example.
.onion addresses are hashes, but a hash long enough to prevent a brute forcing is also too long for people to consistently recognize. Is facebookwkhpilnemxj7asaniu7vnjjbiltxjghye3mhbshg7kx5tfyd.onion the right URL for Facebook or a clever near-collision?
I feel like whatever scheme you use, if it is too simple there won't be enough unique keys to go around, but if it is complex enough it can be subtly changed in ways a human would find hard to notice.
Again, it's impossible, or should be impossible, to easily change a single character in a hash. The avalanche effect means that small changes in the input data will result in huge changes to the hash.
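This is easy to demonstrate; hash two inputs differing in one character and the digests share nothing (Python sketch):

    # One-character input change -> completely unrelated digest.
    import hashlib
    print(hashlib.sha256(b"example input A").hexdigest())
    print(hashlib.sha256(b"example input B").hexdigest())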
The input data in this case is the private key so if you had that available to change, you would not need to do anything more as you could already control the real .onion domain.
The attack on things like onion domains would be finding another domain that to a human looks similar enough to be mistaken for the real deal. You'd do that by brute forcing (same as what Facebook did to get the "facebook" prefix in their onion domain), and for a successful attack the space you need to brute force is limited by what humans can distinguish, which is smaller than the whole hash space.
The limit is much more about how many bits a human is capable of distinguishing reliably at a glance, which I think is very likely below the brute-forcible level.
It’s a good form of secondary support for authentication, and is a reasonable foothold to start expanding circles of trust (i.e. second-hand trust, or “trust what my trusted source trusts”).
Trust on first use combined with preloaded public key lists is the best of both worlds in my opinion. You may say that browsers made in authoritarian countries would sabotage the preload lists, but they could also bundle illegitimate CAs or break HTTPS in other ways anyway.
This. Some standards bodies (arguably) made a big deal about client certificates some time ago to reliably pin client identities for client->server connections (whether it worked is a different story), and I certainly think having functionality for the reverse (pinning server identities) should exist too.
Doesn't have to be Gemini even, but I think getting buy in from browser vendors after the removal of HPKP is going to be a problem...
Another option would be something similar to Go's sum database, but with something like a DHT instead of a hosted centralized service.
Or perhaps instead of a DHT, users could export all their certs to a file and combine/compare entries in a grassroots manner with each other and with devices on different networks.
SSH has a different service model; there aren’t masses of anonymous clients connecting to a typical SSH server, but rather deliberate 1:1 relationships, which is why the model works there. And still, high-security environments deploy certificate-based SSH, specifically to avoid this problem.
There is no government effort to MitM SSH. And if there was, it would immediately crumble because not a single person manually verifies their ssh connection.
"My disdain for web browsers is well documented1."
Ha, I am not the only one.
I do like links though, somewhere around 2.3pre2, with stunnel or haproxy for up-to-date TLS.
Someone submitted an "improved" version of linenoise the other day, which added UTF-8 and ANSI codes by default.
But I prefer to avoid Unicode in the console (cf. terminal, I do not use a graphics layer plus emulator).
This is an example of enforcing developer preferences on the user where the result is added complexity. Perhaps nowhere is this added complexity more evident (obvious) than in the "modern" web browser and websites.
Links does not try to enforce its authors' preferences too much on the user; for example, it has compile-time options to disable utf8, ipv6 and getaddrinfo. The user makes the choice, not the developer.
The fundamental problem with "web development" is that it's extremely aggressive in enforcing the developer's (or the web-development cargo cult's) preferences on the user.
Anyone remember that period when many commercial sites were a hollow shell that loaded a Flash animation? Perhaps a precursor to today's "SWAs". Funny, the Flash idea didn't last long. It died out long before browsers stopped supporting Flash. These "trends in web development" are tiring and annoying.
Gemini doesn't allow for that level of "creativity". The focus is shifted to the information, not the presentation. The user comes first. The protocol may reflect the author's preferences, but the result is reduced complexity.
Could you elaborate (or share a configuration example) on how you use stunnel or haproxy to connect an old browser to encrypted websites? I usually use squid for that purpose, but I've been looking for alternatives.
stunnel configuration is useful for a limited number of sites, since one has to specify every remote ip address in the configuration file. haproxy configuration doesn't have that limitation; it can do dns lookups, use maps, etc.
for example, one can force all http and https to connect to a backend as https with a specific tls version. this is much simpler than eff's "https everywhere", IMHO.
the problem with tls is that it keeps changing. as users, we have to bet that every application will stay up-to-date with the changes, and that the software authors won't make mistakes when adding/updating tls support. IME, this has been a losing bet. by just focusing on haproxy and stunnel i only have to worry about a couple of applications staying up-to-date with tls and implementing support correctly.
Thanks for elaborating. I'd be grateful for any resources or configuration examples that explain how to do this with haproxy in more detail. I couldn't really find anything, but then again, I'm not experienced with haproxy.
The example configurations I have seen around the web are not representative of what one can actually do with haproxy. Everything I know is from reading the source code and documentation.
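As a rough sketch of the kind of setup I mean (untested; hostnames, ports and timeouts are placeholders), something like this accepts plain HTTP locally and speaks TLS 1.2 to the origin:

    # listen on localhost without TLS; haproxy does modern TLS to the origin
    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend local_http
        bind 127.0.0.1:8080
        default_backend remote_tls

    backend remote_tls
        http-request set-header Host example.com
        server origin example.com:443 ssl verify none sni str(example.com) force-tlsv12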
That’s fair. Is that what links without Unicode support does? (Ignore all byte sequences it doesn’t recognize.) Also, I’d still love to know why you prefer stripping out non-ASCII characters — does this sentence become more readable to you with the em dash omitted?
It just simplifies things for me. If I can read text without Unicode, then I don't need it. It's one less variable I need to worry about. Maybe another way to look at it is cost-benefit analysis. I just don't get much benefit from Unicode in the console (I'm usually just reading text), whereas it causes problems from time to time.
I can see a dash in 7-bit ASCII. I am not going to lose the meaning of a sentence by forgoing a few Unicode characters.
People here saying they'd rather use just text-only HTML, or HTML+CSS sans JS, don't understand that what attracts people to Gemini is the ease of implementation.
The main quality of Gemini is that it is easy to understand and implement thus allowing normal developers to create clients and servers from scratch with minimal fuss.
You can actually keep the whole spec in your head which is no longer something anyone can do with the Web stack. What goes on inside a WebView? There is no single person that can reimplement all of that alone anymore, it is too big, too complex. Meanwhile Gemini is so simple that people like me can craft a simple browser in a couple hours.
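To give a sense of scale, the entire request/response cycle fits in a few lines; a rough Python sketch (the hostname is a placeholder, and a real client would pin the server certificate instead of skipping verification):

    # One TLS connection, one request line, one response: that's the protocol.
    import socket
    import ssl

    def gemini_fetch(host, path="/", port=1965):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # Gemini uses TOFU, not CA validation;
        ctx.verify_mode = ssl.CERT_NONE  # a real client should pin the cert
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(f"gemini://{host}{path}\r\n".encode("utf-8"))
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk
        header, _, body = data.partition(b"\r\n")
        return header.decode("utf-8"), body

    header, body = gemini_fetch("example.org")
    print(header)  # e.g. "20 text/gemini"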
Gemini is a bit conflicted here. It has both privacy and simplicity as design goals. But privacy (materialized by encryption) is far from simple, so much so that "rolling your own crypto" is seen as a cardinal sin, and for good reason. Crypto is an arms race, constantly under attack, constantly updated, and very easy to get wrong. The argument on the Gemini website is to use a library, which is indeed the most sensible thing to do, but if you allow code you don't really understand, one might as well use a webview and call it a day.
I agree, and to be honest I'd rather the TLS was optional because this would open Gemini to a lot of retro computers. Anyway, there is Mercury which is a simplified Gemini without encryption IIRC which solves that.
The important part is that you can treat the TLS as a black box and still hold the Gemini spec in your head. It is easy to implement by simply leveraging a TLS and a network lib. To implement a modern Web browser you need way more than that these days, which is why almost all browsers are chromium :-/ no company can justify the effort to develop new engines and clients anymore (my opinion).
If you want Gemini without TLS there is Gopher. Lagrange supports both Gemini and Gopher - and I believe there's at least one server out there that serves both as well, though I can't remember the name of it.
I don't quite get this... it sounds like the author mostly has issues with the HTML part of the web, but much of this seems to focus on the HTTP part... why not keep HTTP and just make a replacement for HTML? Writing an HTTP server is very simple, too.
And Gemini being its own stand-alone thing that is different from everything is a serious hindrance for people to try it.
I've read Drew's stuff for a while and even wanted to test out Gemini out of interest, but it is really annoying/frustrating to use in my experience. And that is even before you get to the fact that there is nearly nothing there and/or it is nearly impossible to find.
According to some other blogs posted here (links are elsewhere in this thread) the entire point of Gemini is that it's useless and annoying to use, so useless and annoying that it drives out any advertisers or businesses that would be potentially interested in using it.
I understand the designer's reasons to dislike online advertising but the train of logic here seems to be "internet businesses like to make things that are convenient, so let's make something which is intentionally not convenient at all" which in my opinion is a really convoluted and nonsensical way to reach the intended goal. This one is a swing and a miss for me. I hope the designer keeps trying and comes up with something that is actually useful eventually.
Internet business got trapped in advertising due to the whole classic phenomenon of "bad money chases out good money". I.e. a similar example lies in businesses funded by VC runways for several years; even though those businesses may not be "actually profitable", because they're getting fake profits (VC money) for several years, they're able to compete against "organically funded" businesses that are actually earning real profits and are naturally solvent. More importantly though - they're often able to compete so well that they drive those other companies out of business. It leads to a churn scenario where, since the only businesses left are VC-funded, everything dies/gets-acquired within a 3-5 year time window (except the FAANG giants doing the acquiring).
Advertising did a really similar job.
It completely destroyed most web businesses trying to get their customers to "actually pay" in some form or another, and ... basically the only survivors have been a few giant media conglomerates (with their own ad divisions), and the FAANG companies (actually selling the ads). Everybody else is sharecropping on google/facebook's proverbial plantation, and like all similar historical situations of rentiership, the vast, vast majority of them have gone out of business and been acquired by the proverbial landlord (this is why, you know, something like 2/3 of the newspapers in the USA have gone out of business in the last few decades). What's even worse with the newspapers is that in many cases they weren't even acquired; they were just shuttered completely.
(The only other major survivor has been people selling actual physical products over the web.)
--
There are thousands of articles about why local journalism going out of business (and not getting replaced by anything, typically) is bad, and I'll leave googling those as an exercise for the reader.
Your assessment isn't wrong, but the proposed solution here is to not support any business at all, which is just as bad. Anything that plans to supplant this and doesn't immediately approach the problem of "how are we going to pay the journalists?" is going to fail.
People here won't like this, but the only realistic alternative to the advertising model where the vendor subsidizes the content, is a DRM model where the user pays to unlock the content.
I think useless and annoying are the wrong words here. It’s elegant in that efforts to extend it beyond its purpose are self defeating. It’s a Chinese finger trap for those who would attempt to embrace and extend for adtech and corp subversion.
That's not really true though, myself or anyone else could fork Gemini right now and add a lot of "unwanted" features to it. The only reason nobody will follow through on that is because everyone who cares about those features is already using the web and has no reason to add them to Gemini. In that sense Gemini has skipped past the "embrace and extend" and moved itself right to "extinguish", which actually seems to be its central design.
Not just that. Whoever uses Gemini already is likely trying to escape those unwanted features, so they have no incentive to use something that adds them in.
I'm all for making it as difficult as possible for the adtech people to get their claws into it, but, at least IMO, this is way over the line into making things worse for end users. Most sites shouldn't be multimedia extravaganza, but that doesn't mean there isn't a place for nicely designed websites.
Honestly, just the lack of inline links makes this stuff more annoying than anything.
> it sounds like the author mostly has issues with the HTML part of the web
Nope. And he has links to his blogs about what he doesn't like.
--- start quote ---
I used wget to download all 1,217 of the W3C specifications which have been published at the time of writing, of which web browsers need to implement a substantial subset in order to provide a modern web experience. I ran a word count on all of these specifications. How complex would you guess the web is?
The total word count of the W3C specification catalogue is 114 million words at the time of writing. If you added the combined word counts of the C11, C++17, UEFI, USB 3.2, and POSIX specifications, all 8,754 published RFCs, and the combined word counts of everything on Wikipedia’s list of longest novels, you would be 12 million words short of the W3C specifications.
I conclude that it is impossible to build a new web browser. The complexity of the web is obscene. The creation of a new web browser would be comparable in effort to the Apollo program or the Manhattan project.
It is impossible to:
- Implement the web correctly
- Implement the web securely
- Implement the web at all
Starting a bespoke browser engine with the intention of competing with Google or Mozilla is a fool’s errand.
--- end quote ---
That's all concerning the content though. If you don't want HTML and Javascript and video streaming and EPUBs on the web, you don't have to use them. None of that concerns HTTP itself, which is developed by IETF.
You could fairly easily make your own web browser that uses HTTP and only reads (gem)text files and ignore the rest of the web.
Or just use Lynx.
Pushing your own standard seems to only add to the complexity of it all. Now if you want to make a browser that can read all the text pages, you have to implement both HTTPS and Gemini.
You are again conflating HTTP and 'the web'. The stuff you are talking about for web browsers has nothing to do with the HTTP protocol. In fact, many many things use HTTP that are not part of `the web`. Most APIs these days are HTTP based, but do not exchange HTML or any web content.
It's not complicated, if you only care about implementing HTTP/1.0 and selected easy/high-value parts of HTTP/1.1. If you want to comprehensively implement an HTTP/1.1 client, it's much harder. Doubly so for HTTP/2.
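To illustrate the HTTP/1.0 case, a minimal client really is just a socket write and a read-until-close; a rough Python sketch (the hostname is a placeholder):

    # HTTP/1.0: send a request line plus headers, read until the server
    # closes the connection. No chunked encoding, no keep-alive to handle.
    import socket

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    print(response.split(b"\r\n", 1)[0])  # the status line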
This all seemed interesting at first, but the more I look at Gemini, the more I shrug and say, "Why not just build some text web sites without ads or trackers?" Literally millions of those exist, some dating back to the first era of the web that every geek my age or older seems nostalgic for.
(Really, they're nostalgic for an era when almost everybody else they ran into on the internet shared the background of being a white, male American geek, very few people were online, and nobody cared about anything said online. Not anything technical.)
You don't want JavaScript, videos, images, styles, or fonts? Don't use them. Nobody is going to come and force your site to have them. On the other hand, your site will load so fast that it won't feel remotely retro. ;)
You can use a static builder (or slap one together that fits your needs) if you don't like hand-coding HTML. Though, really, basic HTML is dead simple for pages of text. If you want to go full 1994, set your robots.txt to block all search engines. Make it so people can only find your site by following links from the sites of the like-minded. Don't link out to anything that isn't a similar site. Boom, you've done everything Gemini does with off-the-shelf software, leaving you with more time to read and write.
(I've seen the counter-arguments from Gemini boosters, but none are very convincing. A Dogme 95-esque approach to the web accomplishes everything interesting about Gemini.)
> Really, they're nostalgic for an era when almost everybody else they ran into on the internet shared the background of being a white, male American geek
As someone who doesn’t fit into that pigeon hole, and whose friends mostly don’t either, it’s so disheartening to read essentialist assumptions like these. We liked the Internet better back then too!
You’re taking something that’s objectively good, and saying it’s associated with white people, and implying that’s bad. So not only do we collectively lose the objectively good thing (because it’s now guilty by association of racism - the greatest social crime of our day), but you’re also erasing non-white contributions to that good thing. It’s an own goal all round.
Well as someone who doesn't fit into that pigeon hole either, but spent a bunch of time on Gemini in the beginning of Gemini (and I might have written something you use on Gemini lol), I disagree with you. A lot of Gemini really is just culture in-group signaling. There's nothing "essentially white, male, American geek" in there but it does harken back to the cultural trappings of the early internet which was largely white, male, and American.
Moreover I don't see what's "objectively good" here, is it the gatekeeping of Gemini or is it the fact that Gemini is mostly just people writing about Gemini and digital minimalism?
Ignoring that you're ignoring the "almost" in that sentence... What's the "objective good" in the retro-internet obsession? I was there, I enjoyed it, too, but there was no objective good to the gatekeeping, there was just a very small, blinkered culture that thought it would be able to enforce its standards on everyone who ever showed up.
If anything, the objective good is that rather than a toy for people mostly at universities, we now have an internet that's a communications medium for most of the people on the planet.
But they sure try. Latest iteration of coercion is called Google AMP.
Images, videos and other embeds are fine, because they're clear siloes; it's easy to make sure the website degrades gracefully.
JS is not, and fonts are not. These do not degrade gracefully.
Neither do complex styles in CSS.
Canvas does not degrade either.
Gemini is a bit too simple to be really useful. What is needed is something closer to old HTML 2 before all the browser warfare, with a few small extras tossed in from later versions.
Maybe with a bit more of a semantic format than a representational one.
I don't understand the level of interest in graceful degradation some people have. Really, I don't understand any level at all. What percent of users are running something other than an auto-updating browser?
I disable web fonts. That makes many sites very difficult to use. Yes, icon fonts are a part of that, but also the change in metrics messes up some layouts rather bad.
Non-american, non-white, and non-males were using the internet back in the early 90s too! Can't vouch for prior to that but there was at least one :)
I think of it as more of a "childhood memories" type thing rather than a bias thing - just like for a child everything they see is novel and new and interesting to them because they've not encountered it before. The internet was the same for a lot of us in the early days because it was novel and new and interesting because we'd never seen or used anything like it before.
Now people take it for granted and we're totally 100% used to it (and it has been commercialised sure), so it is not novel or new or exciting for many of us any more, and we wistfully look back on "the good old days" like we might do the same when we spent hours playing with lego or on an old 8-bit micro etc as a kid.
Except the web of the 90s had quirky stylesheets and images and pages longer than a couple of paragraphs. Almost everything people miss about the old web is Considered Harmful on Gemini.
Gemini is not about the pages without JS and stuff. Gemini for me and many other people is about having a specification that is simple enough that normal people can implement clients and servers from scratch and have fun doing it.
You can keep the whole spec in your head. You can develop both clients and servers from scratch just by leveraging some network and TLS libraries. It is friendly to retrocomputers that are able to do TLS but not beefy enough to run the Web.
> Why not just build some text web sites without ads or trackers?
If I am visiting a site I've never visited before, I have no guarantee what it will do or try to do until I load it/use it. Yes there are ways to control what sites do but the ways to block or manage them always seem to be changing and require keeping up on plugins, whitelists, extensions, which browser is better this week, which NewFeatureKit or TurningWebBrowserInToOSKit needs to be disabled or settings adjusted this time, etc. It's tiring.
We can have an inclusive Internet without this bullshit.
With Gemini, there's pretty much never going to be ads or trackers, because it's as physically impossible as it can be from a protocol standpoint. I can also tell people "install this Gemini browser and you're good" instead of having to explain how modern Internet properties seek to pervasively surveil everything and what needs to be done to prevent it.
> We can have an inclusive Internet without this bullshit.
We can but it won't be Gemini. Gemini has fought _tooth and nail_ against accessibility and accessibility metadata. And Geminauts still mostly produce GMI files even though Gemini says nothing about distributing only GMI files. The early Gemini days had optimism about using screen readers to view pages, but that went nowhere. The mailing list has consistently chosen a desire for ascetic, technical minimalism over accessibility. If those are the preferences, then it sounds a lot less like "inclusive Internet without this bullshit" and more like "our cool club where only the plain text kids hang out".
Gemini is really just anti-metadata in general. We don't even get anything resembling a Content-Length header because supporting delimiters can lead to extensibility which can lead to tracking mechanisms like cookies and ETags.
So educate me: since Gemini has limited formatting and is text-only, why can't a screen reader just read you the text? I'm admittedly ignorant here because I don't have vision-related needs, but to me the fact that you can depend on a .GMI document to be text only would make it seem like screen or text reader software could easily handle it.
> So educate me: since Gemini has limited formatting and is text-only, why can't a screen reader just read you the text?
In practice there's a lot of stuff that isn't just plain text. Things like embedded Figlet images for one. And languages for the other; UTF-8 makes it trivial to switch languages, but how do you get the screen reader to change which language it's reading? That's usually the problem. Plain-text fans, like CLI fans, often claim it's easy to wrap things or display things in other formats, but in practice, only other techie workflows actually become well supported. That's also why most Geminauts are techies.
> Gemini is designed to be difficult to extend without breaking backwards compatibility, and almost all proposals for expansion on the mailing list are ultimately shot down.
This could lead to some interesting adaptations. I'm thinking of the way usenet newsgroups were adapted to store and transmit binary files in the form of NZBs. So while the underlying protocol remains pure and simplistic (relatively), the overlays and adaptations built upon it may bring unwieldy levels of complexity. Not inherently good or bad, but interesting to think about.
Is extensibility really the Pandora's box the author makes it out to be? Just because something is extensible doesn't mean it has to be extended in monstrous ways.
It's interesting to think about this: a simple and ordered space is the most seductive platform to start messing with.
When something is complex and chaotic enough, we start losing the drive to make it even more complex, as it takes too much effort and it's too painful to do. But when a system is simple enough... you have ideas, you think them possible, you feel the opportunity, you want to experiment and explore and create; and it's hard to resist. Therefore, software systems (and many others, maybe we could spend a few thousand pages discussing this for societies as a whole) develop a tendency towards complexity. You could describe programmers as data manipulators or complexity managers. Don't make your first job harder, don't make your second one impossible.
Yeah I agree with you here. Gopher had Gopher+ that allowed for quite rich extensions - the original gopher authors had early prototypes of a 3D "VR" interface for example.
No one really used it for anything though, and then most people moved on as technology moved on.
I think perhaps the Gemini folks are acting slightly paranoid here - like the minute they allow extensions they will be flooded with ads and tracking. I am reminded of https://xkcd.com/538/ - I suspect that really no one cares enough to flood Gemini sites with ads and tracking, and likely never will.
The generic <include> tag being discussed there might have been nice to have.
Also Marc Andreessen's suggestion of "a general-purpose procedural graphics language within which we can embed arbitrary hyperlinks attached to icons, images, or text, or anything" before JS or HTML5 canvas arrived on the scene is interesting.
An alternative protocol that explored some of these evolutionary dead ends of the early web could be fun.
You could very easily make a browser that displays image links inline. Contrary to what is suggested in the sibling comments, you don't need to extend the format with special syntax for inline images.
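The link lines are already trivially machine-readable, so the client alone can decide what to inline; a rough Python sketch of the idea (the image heuristic by file extension is my own assumption):

    # Gemtext link lines have the form "=> URL [optional label]".
    # A client could inline anything whose URL looks like an image,
    # without any change to the format itself.
    def parse_links(gemtext):
        links = []
        for line in gemtext.splitlines():
            if line.startswith("=>"):
                parts = line[2:].strip().split(maxsplit=1)
                if parts:
                    url = parts[0]
                    label = parts[1] if len(parts) > 1 else url
                    links.append((url, label))
        return links

    sample = "=> /photo.png A photo\n=> gemini://example.org/ Home"
    for url, label in parse_links(sample):
        inline = url.lower().endswith((".png", ".jpg", ".jpeg", ".gif"))
        print(url, "->", "render inline" if inline else "ordinary link")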
I can appreciate your desire for images, but I like the lack of images!
I like the idea of text-only, for the same reason I like retro games: you can't hide your lack of real content with flashy presentation.
I think if people look at tech like this as the 'next web' they won't get it, but I think of it as something completely different: using Gemini is more akin to reading a book, while the current web is more like watching television.
I can like television while also reading books.
And to beat the metaphor to death, most of my favorite books do not have pictures.
I would love to see a world where a vast trove of writing was available in a constrained environment like this.
Lots of people have been spreading misinfo that Gemini doesn't support inline images.
Gemini doesn't support auto-downloading of inline linked content. You click an image to load, and the image can be displayed inline if you choose a client that does so.
That image can load inline, in a new window, be printed out twenty miles away and delivered by carrier pigeons, etc; presentation is up to the client.
For anyone reading this and jumping into Gemini for the first time, I made a micro social network which already has a nice little community – come hang out on Station: gemini://station.martinrue.com
Gemini is great, but I would love one step up from this: HTML/CSS support with no JS, or Markdown with CSS styling, or something that's just a little more than plain text but still close.
I've been meaning to try and make my own browser that only supports html/css or Markdown and maybe its own protocol like Gemini's
Gemini's text specification is kind of a subset of Markdown -- it supports three levels of headings, block quotes, bulleted (but not numbered) lists, preformatted text, and links (but not inline links).
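For reference, the whole format fits in one small sample; each line type is identified by its leading characters:

    # A level-one heading
    ### A level-three heading
    A plain text line needs no markup at all.
    * A bulleted (never numbered) list item
    > A block quote
    => gemini://example.org/ A link, always on its own line
    ```
    Preformatted text is toggled by lines of three backticks.
    ```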
While I'm a fan of richer text styling -- I'm often the dingus on HN being downvoted for suggesting that web fonts are not an intrinsically bad thing -- I think Gemini is mostly just fine the way it is. The thing that's kind of a blocker for me is, of all things, no way to emphasize inline text. Changing text from regular to italic/oblique carries semantic meaning. If you could mark it with _underlines_ like Markdown, for instance, it could be just like some of the other Gemini text that's described as "strictly optional for clients to do anything special with".
In the very first days of Gemini's design, I was arguing with other implementers for using a large subset of CommonMark (no raw HTML, no inline images, but pretty much everything else) as the preferred content-type to serve over Gemini. In the end, the goal of ease of implementation for client authors won out. The main arguments were that while there are Markdown parsers for many languages, most of them are narrowly aimed at converting to HTML; Markdown has known ambiguities; the line-oriented markup we ended up with was a strict improvement over Gophermaps while remaining equally simple to parse.
I also miss inline emphasis and other kinds of text-oriented typography, but the reasons gemtext omits them are fairly good.
Come to think of it, this is a client problem (specifically the client used to write the pages): Gemini is natively UTF-8, and bold and italic are present in Unicode!
In general, on gemini, all "problems" are with the client and not with the protocol ;)
That would be pretty cool, but I do think that having CSS would be worse for what I like about gemini.
To me, the point is that you don't need style if the idea and the writing are great.
It would be lovely if something like gemini caught on. I'd love to discover a treasure trove of great writing and ideas (again). That's how I felt in the early days of the internet, and it was amazing.
To me, the current web feels like walking down the street while being verbally accosted by a gauntlet of strip mall sign spinners.
If I see an http(s) URL, I have no idea whether I'll have to play whack-a-mole with selective attack-surface expansion (enabling JS, media, SVG, third-party JS, or JIT) for a rich text document to load.
If I see a gemini:// link, I know in advance that a single download will give me a single readable page, and that it probably won't make my old laptop's fans spin or kill my battery. This significantly improves usability.
Definitely a cool idea. I've thought a lot about this idea as I'm sure many others on this post have. A protocol for just HTML/CSS (and a lightweight bridge) could be cool to see. This desire was why I was initially on board with AMP (before I saw the bigger problems with that format.)
I've also thought about a JSON-only protocol. In my mind this would make it simple to do AJAX-type interactions for loading content (the core benefit of JS to me) while still excising JS in its entirety, but I haven't thought it through very far admittedly.
This is the first time I'm hearing about Gemini and I'm really intrigued, especially how it's handling input and identity. There's potential here...
I've been a reader of this gemlog and a fan of Gemini, I have been looking at minimal featured Gemini servers (everyone seems to be building one) and looking for one specifically that offers an http mirror, text and maybe inline images only. Particularly I'd like to do everything in markdown, does anyone know of a good one?
> clients MUST NOT automatically make any network connections as part of displaying links whose scheme corresponds to a network protocol (e.g. links beginning with gemini://, gopher://, https://, ftp:// , etc.).
I'm aware, I meant inline images in an HTML mirror.
Looks like this might not be exactly what I'm looking for though. It's an HTTP to Gemini proxy, I'm looking for a Gemini to HTTP proxy, I want Gemini to be a first class citizen. The only reason I want to be able to deliver over HTTP is in the event someone doesn't want to mess with Gemini, but Gemini will come first for me.
On a related note, it seems strange that I can't access sr.ht content over gemini. I know that it's not quite trivial to render something suitable if you're starting with DHTML (lol), but it seems like sircmpwn would be the kind of person to bother doing it.
It's not OK in the sense that it is a spec violation, at least. Some clients have images that load in-line on mouseover or click, which isn't a spec violation. You could do that with HTML.
If people specifically sought out your client because it automatically renders images inline, who could be bothered to get upset about a "spec violation"? The author of Gemini? What would he be able to do?
Probably someone would submit a PR complaining. A lot of folks in the gemini community are strict about the spec. In my view, they would be justified in doing so — I think adherence to specs is important in any domain, not just gemini.
Drew himself threatened to block a Gemini agent that fetched /favicon.png on every page load. I tend to agree with the idea that Gemini should not be extended, as there's always HTTP+HTML for that.
I kind of agree with him, despite being of the opinion that optional inline images should be allowed by the spec. Fetching favicons is going a bit too far for my tastes.
I definitely see situations where Gemini wants to be as bandwidth-friendly as possible, and certain clients auto-downloading resources from the same server would make it difficult to estimate costs/resources/etc.
However for external images available over http/s, I think that should be up to the client.
I keep asking this question and not getting a useful answer. Someone made a Wikipedia bridge for it, which is the best thing I've seen so far, but it took me a while to find it.
Most Gemini users with their own pages seem to be coming at it from the perspective of 'you are someone who is already interested in me, so you'll be motivated to investigate my Gemini address.' But if I don't know anyone else who is using it, why do I want to click on a bunch of random links with no context in hopes of finding something interesting? The darkweb has a similar discoverability problem, though there it's more reasonable to assume that page owners may not want to be found by casual users.
Gemini is gaining some traction in Tildeverse communities. I run a Gemini server on http://Ctrl-C.club that has a pretty good number of users posting things: gemini://gemini.ctrl-c.club
I think the problem is that there is a pretty narrow set of highly technical people using Gemini, the kind that exists in so many other communities on the regular web (like here on HN).
Not much, unless you follow some of the bloggers on it. It's a nice idea, but doesn't really have anything to recommend it over just making a website without javascript and ads.
I've already made a Gemini friend on gemini://station.martinrue.com/ (I think one of the benefits is the self-selecting people you meet in Gemini Space). He has led me to gemini://cosmic.voyage, which is a collaborative creative writing exercise, and gemini://konpeito.media/, which is an exclusive Gemini mixtape.
Gopher was my intro to the internet. Coming from BBSs it was very natural. I loved Gopher and I am happy to see its ideology progress to the next evolutionary step. I am aware that the protocol still exists, but I'm hoping Gemini can bring back the desire for text content. I don't ever want to watch a video of a solution that can be written down in 3 lines.
It really isn't. If you want people to use it, you have to provide some utility. If you just want to make a point about how things could be different, you may as well print up stickers that say 'get off the internet and talk to people in person'.
I just browsed around a bit and was left annoyed by the lack of styling. I clicked on a link that I knew was supposed to bring me to another Gemini site and yet I couldn't tell that's what happened. Until I compared the domains, I thought I may have still been on the same Gemini site.
On the web, even sites that use templates tend to have customizations that make them distinct (even if it's just a logo change in the header). As long as no one is intentionally trying to trick me, I can tell without thinking about it when I've clicked a link that's brought me to a new website.
Markdown-like maybe, but certainly not markdown. Markdown itself has tons of rabbit holes and is deceptively complex, and that's before you get into the issue where you can just insert HTML as HTML.
I really wish more people would give AsciiDoc a shot, it's basically a strictly better Markdown with awesome tooling; sadly it lacks the general in-app support that Markdown has accumulated.
Overloading names is frustrating. It's also disappointingly ubiquitous, and whether or not it's a problem depends strongly on your context and expectations. For example, the English dialect I learned growing up exclusively used "autumn" to refer to the season after summer and before winter. Every time I see a phrase like "we will complete this project after the fall" I end up parsing it as something like "after the fall [of civilization]" and have to manually correct.
Presumably when people give something a name that has conflicts, they themselves don't see it that way. This makes it an especially difficult kind of problem, since you have to then argue that a) people like you (but unlike them) have a problem and b) this problem is worth caring about and maybe even c) this isn't going to happen all over again when the new solution is troublesome for some other group.
I find that societal issues fitting this pattern are a real sticky wicket generally.
for me the real benefit of gemini is the separation from the rest of the internet, a part where only the technically savvy can go to escape what the internet has become