That's a no from me. I am not a huge fan of emoji. Especially not in the CLI or editor.
Emoji are hard to type on non-mobile platforms, and are difficult to search for in logs or files. Often they don't render properly. They're near impossible to deal with from the command-line.
Not to mention the accessibility issues.
I see no reason why these cannot be accomplished with tags/flairs such as [critical], [bug], etc. These would also make sense to those not familiar with the Gitmoji system.
Don't make stuff harder than it needs to be. Emoji look flashy, but that's about it. Please use good ol' plaintext.
I want to emphasize that point. Why do programmers like plaintext, often to the point of wanting ASCII instead of Unicode despite its international awesomeness?
That's because we are called when things break down in the machinery. Monospace ASCII may look boring, but it has a precious characteristic: there is an (almost) 1-to-1 mapping between the inner representation of a character and its screen representation.
Most programming fonts make sure 1 l | i I all look different.
We make sure we can spot the difference between `'"
We know of the few remaining pitfalls, like space and tabs, \n and \r, the various escape characters, the infamous \0.
Even with that limited representation, there are many things to be careful of. And the thing is: this representation is just the start of our work! All the rest hinges on it!
Unicode is full of invisible characters, potential diacritics, similar-looking glyphs, and control characters that can break the display in a lot of annoying ways.
We made a whole ASCII-based ecosystem because this mapping is precious in many debugging cases. Whatever one does in the (and I say that non-sarcastically) really wonderful and awesome world of Unicode, remember that we are going to often only see it as a bunch of 0xF0 0x9F 0x92 0xA9 (<- ASCII rendition of the "pile of poo" emoji).
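If you want to see that for yourself, here's a quick Python check; nothing here is Python-specific, it's just a convenient way to poke at the bytes:

```python
# U+1F4A9 PILE OF POO is four bytes in UTF-8 -- exactly the
# 0xF0 0x9F 0x92 0xA9 sequence mentioned above.
poo = "\U0001F4A9"
print(poo.encode("utf-8"))  # b'\xf0\x9f\x92\xa9'
print(f"U+{ord(poo):04X}")  # U+1F4A9
```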
Emoji are great for making some apps prettier, some chats more concise, and for adding non-verbal communication to online chat. They are not a tool for making developers' lives easier.
Only programmers from English-speaking countries like ASCII. There are millions of us living in other countries who are thankful that Unicode exists, and who are constantly annoyed that some of our tools don't support it well enough.
As someone who grew up bilingual in a country where it's not uncommon for people to speak three or more languages, I hate the idea of being prevented from using one of the languages I speak while doing my job. Especially when I'm building products that eventually get translated into five or six different languages, all using different scripts.
This is one reason I prefer using GUI applications over CLI applications: Unicode support is terrible across terminals and CLI apps, while most of the commonly used GUI frameworks are okay.
If your app has no issues rendering हैकर न्यूज़ (Hacker News), then it should have no issues rendering Emoji. If your app cannot render that bit of Hindi text, then the problem is your app and not Unicode.
As a non-native English programmer, I disagree. I love ASCII despite it lacking many characters from my own language. Sometimes, instead of the Ç I expect, I'll get a Chinese character. What do you do then? Well, you switch to an ASCII-only mode and debug the encoding by looking at the bytes. You go one level of abstraction lower than the Unicode.
The ability to do that is precious.
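To make "looking at the bytes" concrete, here's a minimal Python sketch; the cp1252 misreading is just one common mojibake scenario, chosen for illustration:

```python
# Ç (U+00C7) is two bytes in UTF-8.
raw = "Ç".encode("utf-8")
print(raw)                   # b'\xc3\x87'

# The same two bytes misread with a legacy single-byte codec
# become two unrelated characters -- classic mojibake.
print(raw.decode("cp1252"))  # Ã‡

# Debugging means dropping below the text to the raw bytes.
print(" ".join(f"0x{b:02X}" for b in raw))  # 0xC3 0x87
```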
> If your app has no issues rendering हैकर न्यूज़ (Hacker News), then it should have no issues rendering Emoji
First, that's not true. Many fonts have partial support, or an app may have an issue displaying multi-colored glyphs. There are many possibilities for this partial failure.
> then the problem is your app and not Unicode.
Yes. Precisely. I don't think Unicode or emojis are problematic per se. But they belong to a higher layer than the one a lot of (non-web) programmers spend their time in. Unicode support is HARD. ASCII support is easy. Making ASCII tools is easy and promises an ecosystem full of useful tools.
> This is one reason I prefer using GUI applications over CLI applications: Unicode support is terrible across terminals and CLI apps, while most of the commonly used GUI frameworks are okay.
And that's fine, but realize that by doing so, you depend on a layer that relies on an ASCII layer. Tools that require unicode can't be used there.
I think this is more common than the HN crowd assumes.
As someone who develops multi-lingual web sites, I can say that the vast majority of the people who visit the sites and then request a language other than English still have their computer/browser configured for English.
This is completely absurd. English is my only language but I would never hope for something like this. We're not supposed to cater to the needs of our tools; we're supposed to make technology cater to our needs. Following your logic to its natural conclusion, think about how much programming and debugging effort could be saved worldwide if we just stopped using machines and their features altogether.
As a counterexample, when I open some lib's repo and see that the comments in the code are non-English, I just close it and don't use the lib. Not because I'm a racist, mind you, but because it complicates trying to understand/modify it by an unreasonable amount. When I'm working I have to deal with spec requirements in other languages too, which is also troublesome. I kinda agree with GP on this; things would be much, much simpler.
Reminds me of the time when StarOffice became OpenOffice.
As StarOffice was developed in Germany, all the comments were in German. This hindered development for several years as the project became more international, before all the comments were translated by volunteers.
You learn the language to communicate with everyone else on the planet. Computers are tools for manipulating and transferring information so spend time learning their language.
Instead of wasting colossal effort catering to the needs of everyone, let everyone spend some time learning the language of everyone else.
Yes, and my problem is also not understanding every language in the world, nor how to use a Chinese keyboard to type this in. (Yes, I know that's not how it actually works; I'm being facetious. OP probably cannot use it either. If not, substitute 3-beolsik Korean.)
The one thing English has going for it is the ease of typing it on any keyboard everywhere. Romanizations are fine too. Heck, even Russian is a bit problematic. Much more so for CJK, Hebrew, Arabic, and Indian languages.
> The one thing English has going for it is the ease of typing it on any keyboard everywhere.
Is that an inherent property of English, or because we haven't invested enough effort into making non-English keyboards better?
Here's an interesting bit of data: ever since Google added support for Hindi in Google Assistant, it has become the second most used language used for voice search globally [1]. I don't know about other countries, but there's definitely a demand for non-English languages on computers in India.
> Romanizations are fine too.
No they're not. It implies you have knowledge of two separate languages, two separate writing systems, and the ability to convert between them. This is not always the case. People do write Romanized Hindi (and Tamil, Kannada, etc) in India, but that has more to do with the lack of native keyboards than anything else. With the rise of mobile devices, more people are writing Indian languages in their native scripts, thanks to the excellent work Google and Apple have done with their keyboards.
I'm as much a native English speaker as I am a Hindi speaker, and I'm entirely unwilling to accept a worse experience for either of these languages on my devices. Language is part of people's identities, and you can't really divorce language from script without losing something of its character in the process. People across the world have protested and rioted for their right to use their own languages. Governments are elected based on language issues. Supporting your target audience's primary language is part of your job as a good engineer, whether your target audience is non-tech people or other developers.
Where is this unwillingness to support multiple languages in software coming from, anyway?
> Is that an inherent property of English, or because we haven't invested enough effort into making non-English keyboards better?
It is an inherent property of English, at least in some way. English doesn't use diacritics or letters like the German "ß". It is the least common denominator of western languages.
AFAIK, languages that don't have English letters use an entirely different script, and considering that most programming languages, command lines, etc... use English, you need to be able to write English in addition to your native language.
English also has the interesting property that it is really simple to render, which makes it ideal for embedded software, low-resolution screens, etc. Chinese is much too complicated, and even Hindi requires you to combine things.
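To illustrate the "combine things" point, a small Python example (the word is just a sample; this is standard Unicode behavior, nothing exotic):

```python
import unicodedata

# "है" renders as one glyph but is two codepoints:
# a consonant plus a combining vowel sign.
word = "\u0939\u0948"  # है
print(len(word))  # 2
for ch in word:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+0939 DEVANAGARI LETTER HA
# U+0948 DEVANAGARI VOWEL SIGN AI
```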
> Is that [ease of typing it on any keyboard] an inherent property of English, or because we haven't invested enough effort into making non-English keyboards better?
Not of English inherently, more a feature of a language that uses a small set of symbols with many different combinations. The simplest and easiest approach is to give each button a symbol; it's also typically faster than typing on, say, some sort of touch-input keyboard. I agree that there is demand, but English still makes sense for development.
> People do write Romanized Hindi... in India, but that has more to do with the lack of native keyboards than anything else.
And I am stuck using SI units for science, even though I have a preference for American units. Compatibility is important, and as you just stated, people have adapted.
> Where is this unwillingness to support multiple languages in software coming from, anyway?
Even if many programmers aren't dealing directly with the rendering, UI design is a huge pain. Coupled with the fact that many countries with languages more difficult to support have fewer users, it can be a simple business decision.
I can see why a consumer-friendly interface probably should have multi-lingual support (especially if it has a large, non-English user-base), but development tooling operates at a lower level where more computer-friendly representations are necessary. It also has to be able to all work together under the hood, which requires a common language. English ended up the lingua franca, partly because computers were developed largely in America and partly because English text happens to simply be easier to represent than other languages, at least for computers.
Not to mention that transliterations aren't standardised (or rather, there are competing standards, and casually/colloquially people don't use any of them).
Even popular films released with Latin-alphabet names don't use the International Alphabet of Sanskrit Transliteration (IAST) for example.
It makes it hard as someone learning Hindi, because I might read, or someone might tell me, that it's spelt 'k, u, l', but actually it's 'ka, la', i.e. kal or कल (yesterday/tomorrow), for example.
(And that's a common example, because IAST uses 'a' for the inherent vowel, but many people write 'u', which is arguably often closer to the right sound if read as English, but that's probably not a high-priority goal of IAST.)
We're talking about programmers; they have to be able to type English words because almost every language uses English keywords and most command-line tools use English, etc. Since everyone has to be able to type English words, why add new character sets to the requirements?
EDIT: I guess then that also "café", "façade" and "señor" can retain their original spelling. The question is: where does it stop? Will Latin-1 suffice?
You most certainly can. a-z A-Z 0-9 and , . ? ! ; : - ' " ( ) should suffice in nearly all cases apart from anything hyper specialized. You could probably write the majority of all English literature with only those characters, and there's more characters I left off.
What other characters are "widely used" in English texts that are missing from ASCII?
Accents and diacriticals are far from "hyper specialised", they are how many people are taught to write, and want to write.
And - is not a dash, it is a minus. It does not suffice as a replacement for a real en or em dash in a properly typeset text. This is also not "specialised", it is how English is correctly typeset.
And yet accents and diacritical marks are not widely used nor any longer exclusively "proper." You're fighting a losing battle. Language is changing underneath you, no matter how tightly you cling to it.
What battle are you referring to? A large majority of the world's population is speaking languages containing characters outside of ASCII. ASCII has no relevant future.
But this subthread is specifically about ASCII and English. So your point, while valid and true, is not relevant to this specific subthread.
I respectfully suggest you read my posts in this subthread because I don't believe you have. Instead, you appear to have taken my comment on accents in English completely out of context.
I did read the thread. It was about the irony that ASCII doesn't even cover English. Then you start to talk about a losing battle. Still unclear which battle. Are you referring to the evolution of the English language into a language only using ASCII? Because if so, that's completely off topic for the thread.
I can only think of no more than a few common English names that use diacritics, and most commonly drop them. Remember, this subthread spawned from discussing ASCII and English, not other languages.
I think you'd have to pretty deliberately misinterpret my comment about accents being uncommon in English to somehow think I was suggesting you change your name.
Source for the claim that many people are taught to use accents and diacriticals? Many people seem to have the opposite experience of being taught there are 26 unadorned letters in English.
I argue "Bēowulf" is does not fall under "widely used." Even my high school version of it it was printed "Beowulf." I'm not claiming "ASCII contains all characters used at all in English," just refuting the claim that you can't "write proper English" with ASCII and that it is missing "widely used" characters.
You write just like everyone normally writes, by dropping the accent. I only see such spellings when people are trying to convey some sort of sophistication, rather than in casual, day-to-day use.
Precisely. Even when writing by hand, where there are no character encoding limitations, I've never seen anyone add the accents to "fiance" or "cafe" or "resume" when writing in English.
"I only see such spellings when people are trying to convey some sort of sophistication"
McCafe and Buckingham Palace would want the "sophistication angle". Both spellings work obviously, but IMO (and the majority of English books) "cafe" is the "correct" spelling.
Other languages figured out good compromises, e.g. Dostoyevsky is transliterated from Достоевский; the name of China's current dictator doesn't even render in my browser, but is transliterated to Jinping Xi. German passports have the German version for humans, and a standardized, transliterated version for computers. Gößmann becomes GOESSMANN. I see no reason why the same thing can't happen; just drop the accent from the E. Also, for some reason, my browser shows that E as having no accent, if you want an illustration of the problem.
A nitpick: Xi is his last name, but Chinese names are generally transliterated in the Chinese order of LAST FIRST ("Xi Jinping"). If you do a verbatim Google search for "Jinping Xi" you'll get around 500 results.
If a Chinese person adopts an English name, they'll put it first, however. E.g. "Jack Ma".
I'd be surprised if any one of the 500ish English-language books on my shelves could be typeset entirely in ASCII without some compromises. Most are just fiction, too, and few could be characterized as "hyper specialized".
My system (Windows 10) has no issues with Hindi, but doesn't always render emoji. That said, how long has Hindi existed, and how many countries have emoji as an official language?
I was keying my confirmation code into the self check-in machine to get my boarding pass the other day. The code was something like EFO123. Was that a letter or a digit? I got it wrong; fortunately the software said it interpreted 0 as O and gave me the ticket.
Now multiply this by a thousand with Unicode exact lookalike glyphs.
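A quick Python illustration of one such lookalike pair (Latin vs Cyrillic; there are whole tables of these "confusables"):

```python
# Latin 'a' and Cyrillic 'а' look identical in most fonts,
# but they are different codepoints, so comparisons fail.
latin, cyrillic = "a", "\u0430"
print(latin == cyrillic)  # False
print(f"U+{ord(latin):04X} vs U+{ord(cyrillic):04X}")  # U+0061 vs U+0430
```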
You can still have problems with them. They propose to ID a commit type by the first character.
Arabic is written from right to left. Which emoji is the first character of this sequence? It actually depends on the position of invisible LTR control characters.
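A minimal Python sketch of why "first character" is fragile; the right-to-left mark here stands in for any invisible directional character:

```python
# An invisible directional mark before the "first" emoji means
# indexing the first character doesn't give you the emoji at all.
msg = "\u200F\U0001F41B fix parser"  # RIGHT-TO-LEFT MARK + bug emoji
print(msg[0] == "\U0001F41B")  # False
print(f"U+{ord(msg[0]):04X}")  # U+200F -- invisible on screen
```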
I once had an issue where GTK-2 couldn't display any characters in the ASCII/Basic Latin range. It was down to Pango dropping support for the monospace font I used for that range of characters. In my opinion, the most reliable part of your system is the part you put the most attention on. ASCII is only the most reliable if you choose to use it as the basis for your system and then ignore support for everything else.
Although I do agree the one-character-to-one-codepoint mapping of ASCII is useful, things like terminal escape codes already add a layer of complexity that makes it hard for me to imagine a scenario where you can't render U+1F4A9 but can still render things like ^[[31m, found in almost every CLI tool.
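For comparison, both of these go through the same terminal pipeline (a trivial Python demo; whether the emoji actually renders depends on your terminal and font):

```python
# An ANSI escape sequence (^[[31m) next to an astral-plane emoji;
# the escape codes are arguably the hairier input of the two.
print("\x1b[31mred text\x1b[0m and \U0001F4A9")
```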
Wait, I thought the goal of it is not to write emojis per se but rather to use the shortcodes. That way, it's readable even when emojis cannot be rendered, and it even makes search easier. So a typical commit message would read like this:
`:bug: Fix login modal`
It doesn't break anything and when interpreted (github, gitlab, gitkraken, etc...) it displays a nice looking emoji.
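And if a tool wants to consume those shortcodes, the parsing is trivial and grep-friendly. A hypothetical sketch (the function name and regex are made up for illustration, not part of any Gitmoji tooling):

```python
import re

# Hypothetical: split a leading :shortcode: off a commit subject.
SHORTCODE = re.compile(r"^:([a-z0-9_+-]+):\s*(.*)$")

def split_subject(subject: str):
    m = SHORTCODE.match(subject)
    return (m.group(1), m.group(2)) if m else (None, subject)

print(split_subject(":bug: Fix login modal"))  # ('bug', 'Fix login modal')
print(split_subject("Fix login modal"))        # (None, 'Fix login modal')
```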
TLDR from the Git repo: Describe your changes in imperative mood, e.g. "make xyzzy do frotz" instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy to do frotz", as if you are giving orders to the codebase to change its behavior.
People have different opinions on what tense to use, but the one that scales better would be in imperative mood. See example here: https://github.com/facebook/react
How come? As I understand the comment, the user is questioning the tense of "fix" and I'm providing a reason and an example as to why someone would use imperative mood in their commit message.
Edit: Never mind. I must have been tired yesterday. The user wasn't questioning the tense, but pointing out that the "fix" part of the message makes the :emoji: redundant.
Finding the right emoji is easier than on my Android phone as Windows lets me type the emoji name. Why emoji search is missing on Android... who knows.
That still seems much harder to me. Instead of just adding "PERF" to my commit messages, and later grepping for "PERF", I have to pull up a separate window each time, and remember that performance changes are indicated by a lightning bolt.
>remember that performance changes are indicated by a lightning bolt.
Tbh that's a very weak consideration, as it gets ingrained through standardization throughout the codebase. And if this is for codebases you're not familiar with, then PERF is hardly guaranteed to produce useful results (it certainly isn't standard in my own codebases).
Much more dangerous is people expanding on emoji-tagging with things like COLOR and similar VARIANTS, or an explosion of tags like the ones being suggested in this article, making it even harder to use/memorize the system.
But worst of all is that emoji are ugly and ruin the aesthetic of anything they come in contact with
Git is for programmers. What other demographic should we be caring about?
If I'm an Android dev, it might bother me that application suspend is deliberately mishandled on tons of lower-end devices to improve battery life. Should I just ignore those users? If I'm making an email client, it might bother me that Gmail's IMAP support is crap. Should I stop supporting Gmail in my client?
Tools make concessions for the platforms and environments that they'll run in. They always have, and they always will. You get to ignore platforms if you're comfortable saying, "my stuff isn't for the users of those platforms." But so many programmers are on Linux, it probably isn't feasible for most OSS projects to ignore them.
None of that means that Linux shouldn't have better emoji support. It should. It's embarrassing that it doesn't. But, you have to deal with the world as it exists today.
"Programmers don't like this (shit) tool addition you wrote because it doesn't work with the existing (shit) programming tools" is a better paraphrase TBH.
Cmd+Ctrl+Space brings up the emoji palette on macOS. This palette is easy to use from the keyboard and works in the terminal too.
Some MacBooks famously also have a TouchBar that makes emoji easy to type.
I can't remember the last time I had issues rendering emoji, and I have no idea why they'd be difficult to search for or how they would pose any accessibility issues.
That keyboard shortcut brings it up, unless you've already brought it up on a different workspace, in which case that shortcut does nothing (until you find the previous workspace where it was brought up, and dismiss it there).
I had no idea it was usable from the keyboard until I read this comment. There's a search box but it's invisible and hidden until you type blindly at a window that doesn't look like it accepts keys.
Terminal mostly supports emoji, but has many bugs. Some emoji, after being typed and deleted, don't properly restore the cursor to its original position.
ASCII may be ugly but works 100% of the time for me. I tried using emoji on macOS 10.14.5 Terminal just now and ran into every issue you said you don't encounter.
Except, that's not really "easy". I prefer typing with buttons, not having to pull up a picker and search for each character. As others have mentioned, I don't want to pull up some window to grep through a commit log; I work very quickly in the terminal, and this would be an awful interruption to my flow.
What’s special is that they are inherently visual instead of symbolic. What happens when the culture shifts and :bug: is replaced by :pile-of-shit:? We must now remember that 5 years ago we used :bug: to mean software defect, but after that one blog post we all use :pile-of-shit:?
It’s like the second derivative of the floppy-disk-means-save problem, but pushed into our terminals where despite their drawbacks things must generally have a relatively fixed meaning devoid of the context required to interpret purely visual signifiers.
If you mean that performance improved just write PERF instead of :lightning-bolt:
On macOS, emojis are actually easier: Control+Command+Space in any text field, then type a word to filter/search the emoji list [0], or expand the popup to a full-blown Unicode browser [1].
> Emoji are hard to type on non-mobile platforms, and are difficult to search for in logs or files. Often they don't render properly. They're near impossible to deal with from the command-line.
While I agree that emoji shouldn't be used in git, it's worth noting that the limitations you point out are platform-specific, and not universal.
In macOS, emoji work very well in any app that chooses to allow them, including the terminal. Typing emoji is as simple as double-clicking on the character you want from the pane that pops down from the menu bar.
Linux may not have first-class support for emoji, and Windows may not have first-class support for emoji. But those are limitations of the operating systems, and not of the emoji, themselves.
Japanese culture has used symbols (not kanji, more like dingbats) for at least 20 years, if not 30 to 35. They use them in business documents all the time. The most common are probably ●○◎■□▲△▼△. In fact, just as a generic name in the USA might be "Smith", in Japanese they often use 〇〇さん.
Further, they've been using emoji for the last 20 years. So telling them they can't is basically telling all of Japan they're second class: their standards and ways of communicating don't matter. Lots of programmers from cultures where emoji and symbols came into common use only a few years ago dismiss them, but other cultures have been using them much, much longer. As such, banning them is unintentionally racist.
I don’t know if “racist” is the right word for what the GP poster meant, but there’s certainly a concept of “complaining about something while ignorant of the fact that it isn’t a problem outside of your own particular culture.” Whatever the adjective for that is.
It’s the same property that applies to the people who, 20 years ago, initially argued for UTF-8 specifically because of how efficiently it encodes from-ASCII codepoints, ignoring that for most text in most languages on Earth, UTF-8 is less efficient than UTF-16. (Of course, UTF-8 won out anyway, but more because it’s easier to parse, auto-resynchronizing, not limited to the first 16 planes, and because we ended up inventing all sorts of XML/JSON-based document formats, like ePub and OfficeXML, that cause even documents in other languages to be mostly from-ASCII codepoints by weight.)
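The byte counts are easy to check; a quick Python comparison (using "हैकर" and "Hacker" as samples, and utf-16-le to skip the BOM):

```python
# Devanagari codepoints cost 3 bytes each in UTF-8 but 2 in UTF-16;
# for ASCII-heavy text the ratio flips.
hindi, latin = "\u0939\u0948\u0915\u0930", "Hacker"  # हैकर
print(len(hindi.encode("utf-8")), len(hindi.encode("utf-16-le")))  # 12 8
print(len(latin.encode("utf-8")), len(latin.encode("utf-16-le")))  # 6 12
```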
It’s also the relevant property for Americans who complain about large downloads for games, or who have cellphone service but don’t have data, and think “the world” needs solutions to these problems, when really, North American ISPs are uniquely horrible (to the point that a game developed excluding North America as a potential target market could probably do things we’d think of as “impossible” right now, like live-streaming mesh co-AR or something.)
> I generally find those against emoji to be unintentionally racist since other cultures have been using various characters for decades if not longer
Sure. Some writing systems use singular glyphs to represent an entire word. We're not talking about those. We're talking about using a picture of lipstick to represent updating UI and style files, and other silly suggestions. I'm not sure how you confused the two.
Why do people feel the need to standardize these things?
Emoji used occasionally are fun and refreshing and can draw attention to important commits. Emoji used by default become line noise.
To paraphrase Goodhart’s Law, when a joke becomes a standard it ceases to be fun.
I also object to the details of the categorization. Some tags split up tasks that really should be done together (eg fixing bugs and updating tests, or adding features and writing docs) and some tags are duplicates (eg :art: and :recycle:).
> Why do people feel the need to standardize these things?
I'm of the opinion that these kinds of silly arguments seem to be made by people who like hearing themselves talk. They want their name on something. They want to feel like they were a part of something. It's all ego. The rest of us are happy with pragmatism and archaic things like text, which is made of glyphs which form words which we speak.
I'm not against automation in principle, but in this case you don't actually need to standardize before you automate.
Observation: people are prefixing their commit messages with emoji!
Automation: let's group commit messages by emoji, with an "uncategorised" bucket for emoji-less commits. Job done, that's all you need.
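A rough Python sketch of that automation, assuming you already have the commit subjects in hand (the emoji test is a crude codepoint-range heuristic, not a proper emoji detector):

```python
from collections import defaultdict

def leading_emoji(subject: str):
    """Crude heuristic: treat a first character in common emoji
    ranges as the category; real emoji detection is messier."""
    if subject and (0x1F000 <= ord(subject[0]) <= 0x1FAFF
                    or 0x2600 <= ord(subject[0]) <= 0x27BF):
        return subject[0]
    return None

def group_commits(subjects):
    groups = defaultdict(list)
    for s in subjects:
        groups[leading_emoji(s) or "uncategorised"].append(s)
    return groups

print(group_commits([
    "\U0001F41B fix login modal",
    "\U0001F41B fix logout crash",
    "bump dependencies",
]))
```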
The "standard" that defines which emoji to use and when can evolve completely separately. I'd argue that this standardization is actively bad, as it adds to the learning curve when contributing to a project, stifles creativity (you may only use these emoji in exactly these ways) and wastes time by encouraging bikeshedding (people adding new categories, or arguing over existing categories).
> With gitmoji others or your future self can simply look at the associated emoji and straightaway catch the intention.
Just like a word does.
> The [bug emoji] emoji is easily recognized as a bug by most people
The word "bug" works for me.
> but will [tulip emoji] immediately signalize that code is removed?
Nope. Besides, we already have commonly used symbols for that, "+" for a line added and "-" for a line deleted. I'm not sure what a tulip has to do with deleting code.
edit: hackernews removed the emoji from my post. (I put in the words for the images.) Another reason to not use emoji.
As for a flame, I don't associate fire with removing things. Frankly, emoji and icons may work for things but they don't and never have worked for actions. Apple, for example, never did come up with an icon for "print" that people didn't have to be told what it meant. Apple could have saved a lot of grief by using "print" instead of the icon. (The same for "delete".)
Back in my days at Symantec, the people making the IDE worked hard to come up with icons. They had a problem finding an icon to represent a CPU register, and finally settled on a picture of a cash register.
This alone goes to show that the emoji representations themselves are not even consistent. A flame emoji rendered all in black can look like a tulip instead; the question is no longer the original intent of using a flame, but whether the symbol is interpretable at all.
> Apple, for example, never did come up with an icon for "print" that people didn't have to be told what it meant. Apple could have saved a lot of grief by using "print" instead of the icon. (The same for "delete".)
Do you have any references for this? Every print icon I've seen Apple use looks like a printer, and every delete icon looks like a trash can. Is there an article or something saying these caused confusion?
My reference is when an Apple salesman came by my workplace around the time the Mac was released. He proudly showed off a sheet of icons, claiming they were all obvious, to a group of maybe 15 of us. One pointed to an icon, and said "what's that box of kleenex mean?" The salesman says "print, it looks like a printer". We all laughed and said it looks like a box of kleenex. The salesman didn't get much headway after that.
Printers come in all kinds of form factors. Besides, even if it was recognizable as a printer, a printer is a thing, not an action. Why wouldn't it mean "configure the printer"?
Of course. The notion that these things are "intuitive" is false. Recall all the laundry icons you see on clothing? Nobody knows what they mean until they're told, even though they're standardized.
I don't like text that looks different for everyone depending on the platform they're on.
Emojis have shown us already that vendors are willing to change their appearance depending on current trends or political climate. Gun becomes water gun, salad becomes vegan salad. One day, if caterpillar becomes a butterfly because of some mundane idea, is it still going to be instantly recognizable as a bug?
We write code in formal languages rather than natural ones to stay away from ambiguity. Emojis are a straight downgrade.
> But we already have fonts that make the Latin alphabet look different on different platforms.
But the differences in appearance are abstracted away; the whole point of an alphabet is that it's a small fixed set of symbols. Conversely if you see 𓀪 on one platform and 𓀫 on another, you have no idea whether the difference is meaningful or decorative.
> the whole point of an alphabet is that it's a small fixed set of symbols
Emoji is an alphabet. There's a fixed set. I know they look different on different platforms but it's the same Emoji. Just like how an A here looks different to an A on Twitter. A vomiting face emoji conveys the same thing however it's drawn.
Emoji aren't a fixed set; there are new ones added all the time: 230 this year, 157 last year, …. When I see an emoji, I can't tell whether it's an existing one drawn in a different style, or whether it got added since the last time I memorized the code chart.
I've gotten into arguments over whether or not emoji belong in the unicode standard at all, and I'm not going to rehash them here.
What I am going to say is that regardless of whether or not emoji are a good fit for unicode, they don't degrade gracefully for blind users (or for sighted users on older hardware) and they're prone to rendering issues between different fonts that can mask intent.[0]
Everything bad about icon fonts also applies to emoji. All of the articles you've ever read about how svg icons are preferred[1] -- all of them apply to emoji as well.
So in general, I only use unicode emoji[2] if two things are true
A) I'm in a closed conversation that won't be copied and pasted around or shared publicly.
B) I know the exact platform that my reader will be using to view my text.
I don't think Git falls into either of those categories.[3]
[2]: I do heavily use emoji shorthands (:bug: :cat_eating_avocado:) but these don't suffer from most of the same problems as emoji. They work as progressive enhancements for the platforms that render them, and fall back to readable text on the platforms that don't.
[3]: It's true that unicode fonts in general have issues when you get into non-English languages, but I don't advise people to avoid, say, Chinese characters, because: A) language glyphs don't change often enough to frighten me on accessibility, B) people who need to see them are likely already using devices that support their own languages, and C) we don't have a good alternative we could use instead.
why not? there is a canonical plain-text description for every unicode emoji. at least in theory, i see no reason why a screen reader shouldn't be able to handle emojis.
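that description is right there in the unicode character database, e.g. via Python:

```python
import unicodedata

# Every assigned codepoint, emoji included, has a canonical name
# that a screen reader could fall back on.
print(unicodedata.name("\U0001F41B"))  # BUG
print(unicodedata.name("\U0001F4A9"))  # PILE OF POO
```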
In theory, there's no reason why a screenreader can't. In practice, this depends on the combinations of applications you're using and whether or not your emoji dictionary is up-to-date. I suspect iOS/Android readers will handle them fine, I suspect older versions of JAWS will either have incomplete dictionaries or not handle them at all.
Blind HN readers, or just readers who have more experience than me are welcome to comment here.
The problem with emoji that isn't present for glyphs in other languages is that we get new emoji every year, which means that the standard just moves a lot faster than, say, the German alphabet.
Arguably a year is more than enough time to update a screenreader, but to be fair, how long did it take for applications to get to the point where they would pretty much all gracefully handle curly quotes across the board? Even some E-Readers used to have problems where if you searched a straight-quote, it wouldn't match curly quotes in the text. And, like, their only job is to make text work correctly.
> forced to rely on poor tools (I hate JAWS so very very much)
Blind Windows users aren't forced to use JAWS, at least in the sort of context where they're likely to encounter emoji (i.e. I understand it may be forced in some job settings). NVDA is freely available and Narrator is built in, and both handle emoji well AFAIK.
Disclosure: I work at Microsoft on the Narrator team, but I'm just as happy if you use NVDA.
> NVDA is freely available and Narrator is built in, and both handle emoji well AFAIK.
They may handle emoji, but they also regularly fail to handle other common screen-reader situations well.
I do understand the difficulty - most interfaces today are visually-oriented, and often don't include the necessary contexts for accessibility. This turns our screenreader programs into massive and complex programs that have to handle so many different formats and edge cases they become truly monolithic.
But I haven't found either of those to be up to a level of quality where I can put up with them on a daily basis. JAWS is simply the least-worst at the moment.
(Small examples: NVDA is similar to the sloths from Zootopia's DMV anytime you're doing anything on a network (whether or not it's the active window). Narrator's settings regularly get reset by updates, and it will re-enable itself at random and unexpected times.)
> NVDA is similar to the sloths from Zootopia's DMV anytime you're doing anything on a network
Do I understand this reference correctly? Are you saying NVDA is really slow when you're doing anything on a network, even when that network activity is in the background? Can you give an example? I've never experienced this nor heard of it. But maybe I'm missing some specific detail of the Zootopia reference. I watched that movie once, or rather listened to it without audio description (with my family), and I don't remember the DMV scene that well.
> Narrator's settings regularly get reset by updates, and will re-enable itself at random and unexpected times).
Have you reported this through Feedback Hub? I haven't experienced or heard of the problem with settings being reset by updates.
Yes, NVDA has some problems to do with network connectivity. It's a fairly well-known problem. It has been solved several times, and re-appeared several times, but hitting the wrong trifecta of Windows version, network card, and NVDA version means it can take somewhere in the range of a full minute just to say "Firefox".
---
Windows isn't my daily driver, mostly because Windows 10 has at several points reset _all_ the system settings with some updates, not just accessibility. Again, this isn't an unknown problem. It may happen less often now that the team seems more aware of it, but it only has to happen once to render my computer nearly useless. It's less of a problem with Narrator, and more the tightly coupled nature of Windows components and the decided drop in update quality in recent years.
The part about Narrator randomly speaking is something that has appeared since the XP days. Now and then a single word will squeak through when the whole thing is supposed to be on mute. It's completely unpredictable, though, so I wouldn't expect it (as a programmer) to be solved at any point soon.
Google Assistant is very far from most of the common screenreaders. Both in quality (better) and capability (worse).
JAWS has been known to crash when handed some unicode sequences. So now it simply ignores them and says nothing.
The alternatives tend to be worse than JAWS. Such as spelling out the codepoints in hex, or crashing before it gets anywhere near the unicode it doesn't understand.
> they don't degrade gracefully for blind users (or for sighted users on older hardware)
It irks me when people, including blind people, advocate against something on accessibility grounds when it's something that current screen readers can handle perfectly well. Another example of this would be saying that JavaScript-dependent sites are inherently inaccessible. We should save our accessibility advocacy for things that are actually problems.
The fact is that if we assume that people want to use little pictures to communicate, then exactly because emoji is part of the Unicode standard, we should encourage its use, as opposed to, say, graphics that may or may not have alt text.
Definitely a no for me. I also hate seeing commit messages littered with emoji; it feels like the author is too lazy to find the correct words. I also disagree with the "visual cue" advantage, because the emoji looks different per machine/font/terminal/OS. Plain text looks consistent everywhere.
> Forces you to make smaller and more specific commits
It doesn't force anyone to make smaller and more specific commits. It may be used as a reminder, or a tool, but it certainly isn't going to prevent someone from committing junk behind whatever emoji they choose.
Not to mention that a non-emoji based tagging scheme (e.g. “[bug]”, “[UI]”, etc.) would likely have the same effect (if there is any) that the author purports.
Absolutely not. Keep your emojis out of my CLI and my face, even, as far as professional areas of life are concerned. I see READMEs full of emojis, the description of the project is full of emojis... I personally dislike it, and it actually makes me reconsider digging more into that project. I might have prejudices, but it is quite childish to me. The only place where I am fine with emojis is instant messaging applications (for casual conversations). You can even have stickers there if you want for all I care!
Because it relies on a lot of cultural background. Which in the world of emoji changes fast. This won't be welcoming of new users when they have to learn 10 or more symbols for whatever you're doing.
:lipstick: for UI changes? Why? :computer: or :smartphone:, as those are the usual interfaces used, or :wheelchair: because you care about accessibility, would feel better.
If we could have :pig: as well, we could at least do :lipstick::pig: for kludges on bad code...
I think you're right on the mark about the strong cultural context and arbitrariness of the symbols.
The value I see in emoji for this sort of application (as others have also pointed out here) is for jokes and maybe (rarely) highlighting lines visually. Obviously the joke dies if the use gets standardized, and the practical disadvantages far outweigh the advantages.
Nay! Emoji is slang that excludes a huge number of people, not just due to preference but also because the meaning keeps changing and can be confusing if not exhausting to keep up with. It skews young and/or Western and there's far more kinds of programmers than that.
Yes, emoji is literally Japanese: 絵文字, from 絵 (picture) and 文字 (character). Invented for use on Japanese pagers, later proliferating to mobile phones long before smartphones even existed [0].
If we cannot describe in the subject what we are doing, we should not commit it.
Icons are perfect for denoting actions (fixing bugs, improving the layout, etc.); however, the commit message is important when you are doing releases, cherry-picks, maintaining patches, etc.
I'm not against changes and sure the icons look very cool on the front page of GitHub or in `git log`.
However, most maintainers, who actually do things with commits, work by looking at the content. Being useful in the future, for understanding what was going on, is the main purpose of the commit message.
I agree that a bug, a rocket (and others) can be used to improve the subject of a commit. Ultimately, the problems are:
- By introducing icons, are we creating a perfect excuse to be lazy during the commit phase?
- Will the icons make our history of changes poorer?
Apparently Ubuntu 18.04 introduced native emoji support by bundling Google's Noto Color Emoji font and an emoji picker UI built into GNOME. It looks very similar to the macOS built-in emoji picker that I'm used to.
I'd rather not but not because of the emoji but because atm I don't like categorizing commits in the first place. I've just tried doing it on a project and it felt pretty weird. Instead of just grouping single atomic changes like I'd usually do, I suddenly also have to think about what category fits a commit best and what to do if a commit satisfies multiple categories.
Thinking about it, emoji could actually help with this since it's easier to assign multiple categories to a commit without wasting a lot of character space. Still, what I'd rather want to see become a convention is an annotation in the commit's body that simply marks whether it's a major/minor/patch bump (or none). Plus optionally a line for the changelog. (Optionally because it allows commits to stay small while also not cluttering the changelog.)
- you can glance at the commit log wall of text and still get a lot of info, not something you can do with other forms
I don't like the textual representation though (i.e. :zap: vs :feature:), and if you view the log in systems that can't present images it might actually be harder to understand.
macOS VoiceOver reads one of the names of the emoji (but not necessarily the most useful one in a given context). For example, the caterpillar emoji gets read as "caterpillar". However, importantly: there's no indication that it's an emoji when read.
Caveats: I have no idea what JAWS does here. I’m not a visually impaired user, but have done a11y work in the past.
It saves people from having to guess which subculture you come from, just so they can understand what a certain picture means for you. Emoji are very irritating for this reason: They are focused on the writer, and not the audience.
I have tried to use Gitmoji and other standards before on a few projects, but I have since stopped. The motivation behind it makes a lot of sense, but I can count on one hand the number of instances where I have used the emojis as a cue to find problematic commits.
That being said, I never really thought about the point about forcing you to ensure your commit is atomic. I definitely have become better at it, and the habit has stuck even after giving up Gitmoji.
I am glad I am not the only one against this. Especially iOS developers seem to love to put Emoji in their tooling and git repos, but that's just my experience.
As long as there is a non-emoji description of what is happening in your commit, use whatever you want. It's not something that I would ever use on anything semi-serious but I'm not going to forbid people who want to express themselves that way.
The one thing that I will forbid them to do though is use it in branch names - and especially in the beginning of the branch (if it's in the middle you can use tab completion). Your commit is something that I can read or I can use its hash, but branch names are something that I have to interact with directly and I don't want to have to copy/paste the name from github or trying to figure out how to type this emoji.
It wouldn't have even occurred to me if you hadn't specifically mentioned it, which now makes me suspect it does have something to do with the fact that a woman proposed it.
We have enough examples of internet outrage caused by tech guys treating tech women as peers.
That is, outrage over "hostility towards tech-women" when what actually happened was that guys criticized women exactly as they would have criticized men. Even seemingly respectable organizations like the Linux Foundation have fallen prey to such nonsense fronted by (racist!) Twitter lynch mobs[1].
You'd think this would be a pretty clear-cut case when considered objectively, but it has proven controversial, even here on HN.
In such a climate, it is only reasonable to pro-actively guard against such accusations. The troublesome part is that it's seemingly needed.
I had a short discussion about them just yesterday. It’s not a yay for me but also not a nay. I use icons for change lists in pull requests and release notes. But I use my own set of svg icons to have better control over the icons themselves.
I don’t like these icons in commit message subject lines because they're too ambiguous. If you adopt the imperative subject line style (Add class X, Fix issue XYZ, etc.) you have all the same benefits plus a clearer meaning. Also, using an emoji plus a description like ':bug: fixes issue' makes the whole benefit of short one-character icons go away.
As the article says, I can see that the emojis align with the intended meaning, but I'm pretty sure I wouldn't be able to guess the intended meaning from the emoji for many of these.
I'd think prefixing a commit message with [linux] or [hotfix] or [refactor] provides the same at-a-glance benefit with less cognitive load when reading the commits.
I was about to go all-in on this style as recently as ~2 weeks ago.
It's another forcing function to be cognizant of what exactly you're working on at a given time. (Speaking from JavaScript land,) don't update the packages, fix two bugs, add a couple features, refactor some code all in the same commit. When things go wrong in a few months, your bisect will land right on that big blob of things. In the vein of focusing on things it's also nice when reviewing code. Ideally reducing the amount of times someone sneaks in a random 'bug fix' that eventually needs to be tracked down and added to the spec.
Also, the idea of being able to categorize all commits into their respective buckets between releases. A similar idea from the other commit message prepend style out there, but with emojis. Grep for all :sparkles: and :bugs: and you have your patch notes.
But, as I started with, I couldn't commit (heh). Terminal + `zsh` on macOS Catalina seems to refuse to format emojis properly, often resulting in console frustrations as spaces disappear and emojis overlap text. If anyone knows a fix, let me know!
Instead of grep for :sparkles: etc you can also just grep for any other prefix style. The biggest turn down for me is the fact that these emojis render differently on each platform. It’s hard to transpose meaning for an icon that is not static in its visual.
I’ve been consistently using it on my last 3 projects and I really like it.
It brings a simple visual cue to the content of a commit, it's displayed correctly on all my git clients, and it's just fun.
Also, you don’t want to have more than one emoji so I agree it forces to have more atomic commits.
I recently started using emoji to name tmux windows and I find it pretty useful. That being said this is probably pretty cool too. However, I could never see myself enforcing such a thing in my flow. Let alone trying to enforce it on others. It’s a nice thing if people want to do it though.
> A positive outcome of using gitmoji is the fact that it forces you to think through the content and message of your commits to a larger extent.
It really doesn't. My personal experience with people using them is that the content and message of the commits is the same quality as they are without them, but now they also have a pretty picture on the front of them.
It can encourage people to think more about their commits but I can't say I've seen a correlation.
I don't mind people wanting some "personality" in their commit one-liners, but it's the message and not the icon that adds value.
It may be useful, but only if you implement it consistently in the workflow. However, what will happen when someone new in the team comes along? I personally prefer to prefix my commit message with the issue number if useful.
In the example given on their website (the PR example), it makes no sense to me. It does not improve the quality of the titles. Did they remove animation, did they add it? Replacing the emoji with 'add' or 'remove', the title describes what happened inside that Pull Request.
Something does not have to be beautiful to be useful and readable.
I think this is a highly regional preference even without meaning to be. Emoji are more popular in Asia than in the Americas as part of normal conversation.
Probably because the entire generation of internet users right now grew up on smartphones before laptops.
Secondly, we already struggle with international keyboards to find things like the rupee symbol (₹) vs the $, etc. Emojis are just one more layer of that effort.
I've heard some ops want them so they have a clearer view of what's committed, but I don't like being dependent on them. I ended up looking up what kind of emoji I had to use on every commit, and it felt like a waste of my time.
It sounds like you have several people discussing a change that's going to have a limited impact on outcomes, if any at all. Another warning sign is if people are having more fun arguing over the change than they would do actually using the new system.
Feels like a zoomer thing. I wouldn't do it for personal projects, and I wouldn't allow it for teams I lead. They seem nice on the surface, but they just become noise.
Ever install a popular node package? It barfs up a ton of emojis in my terminal.
It's a way of segregating people by age, then acting derisively about various groups in response to that. You identify bad character traits, call them out with the relevant "generation" phrase, and then sneakily imply that they have that character trait because they're within that age range (even when they're not).
The worst part of it is that this obsession with "generations" is almost exclusively tied to the US. In most countries, this concept is not part of the memescape at all, so it's even more confusing.
When you think about it a bit, you realize there just cannot be "generations" in that sense. People are continuously being born in each and every year. In a family tree there are generations sure, based on the levels of the tree, but not in a population.
The exception is special baby boom periods and their echoes, which do create some more "lumping" of births.
The worst is when people try to apply American generational stereotypes to other countries. Like, no, Hungarian "boomers" didn't grow up in plenty. The 50s were a period of terror and deprivation, not the "good ole days" of prosperity.
Coming from a culture without this obsession, it took me a very long time to learn the concept of generations. And I'm quite upset that I did; it's fairly deeply embedded in my mind, now, as I tied it to so many other concepts in a desperate attempt to make it make any sense. (Is a generation twenty years, or fifty? I still don't know.) It takes some effort to make that realisation now, but I remember thinking exactly those arguments when I was younger, trying to reject the concept.