This goes back to the Netscape/Sun partnership. Sun allowed Netscape to use its trademarked name Java when they created JavaScript, and Microsoft created JScript. When they decided to standardize the language into a spec, they did so with ECMA. The involved parties couldn't agree on a name, so they settled on the highly original name ECMAScript as a compromise.
"Eich commented that "ECMAScript was always an unwanted trade name that sounds like a skin disease.""
I see what you're saying, though I think it misses the mark somewhat.
By analogy, 'the web' does not mean only HTTP, or even just the various protocols involved (to name a few on the client side: IP, TCP, DNS, HTTP, HTTP/2, TLS, WS/WSS, etc.).
The term also refers to the use of browsers, hosted websites, and the culture surrounding how society interacts online with these sites. In short, 'the web' is an ecosystem, just like Bitcoin.
With that said, it's widely accepted that the proof-of-work system has not yet been compromised, nor has the network itself. However, let's not gloss over the fact that there's a rather established hacking problem in the space affecting a broad audience:
ransomware, hacked exchanges, personal wallet attacks, online wallet attacks, 2FA attacks. Even a few of the most paranoid and technically savvy early adopters have fallen victim to such schemes.
Kudos on a useful concept. Some feedback: alternate pronunciations seem to go a bit wonky. For example, it pronounces Ribosome [1] (/ˈraɪbəˌsoʊm, -boʊ-/) as ri-bo-some-bo :). Overall it's quite a nifty tool, as many Wikipedia pages do not have native pronunciation recordings. I could also see it helping casual learners of IPA.
You confused me quite a lot! Why did you "transliterate" /soʊm/ as "some"? Is there a dialect of English where "some" is pronounced as /soʊm/ and not as /sʌm/ or /səm/?
Maybe this is just a lot of lying with graphics, but it has always bothered me that Colorado has been a tech center on and off for decades, while California is nothing but truck drivers and secretaries. Yet it seems like Colorado is the last place people look for tech talent.
Having worked here for the last two decades, since I arrived in '94, I've never had real trouble finding a computer programming job.
(I don't count the time period when all I knew was Visual Basic and was re-training in another language while job-hunting, because really, that was on me.)
In the past I've been adamant about buying the LTE+WiFi iPads (because more options, right?). In hindsight, however, the cellular feature was barely used.
Have you considered an LTE mobile hotspot? It can be left charged (but off) in your laptop bag for days or weeks and only used for travel or as needed. I found it to be super useful and not much hassle at all. Also, multiple devices can connect quite easily.
Lastly, a benefit of this sort of decoupling is that it allows using a different network carrier than your mobile phone, effectively increasing your LTE coverage.
I try a lot of stuff. I'm on my fourth iPad, second Surface Pro, third MacBook Pro, and fourth ThinkPad. I have also had a Newton, Palm Pilots (various), a Handspring, a Nook (ebook), and Android tablets (the most interesting being a 13" one from Lenovo with a built-in projector).
I use the Verizon LTE network on my iPad Pro all the time: when I'm on the train, when I'm travelling hither and yon. I don't browse/read/search on my phone (which I use for actually calling and texting people). I'm not saying that I represent a typical consumer, but I certainly represent at least a cohort of 1 consumer :-).
I had a MiFi for a while (company purchased), but found it consistently had lower performance than the native networking of my iPad, and it was easy to lose if I wasn't paying attention. Twice it wasn't charged when I needed it. Looking at it overall, because it wasn't forced to charge when I charged my laptop/tablet, it sometimes got missed when it needed charging. Since the built-in radio is by definition charged whenever the iPad is, that failure mode goes away for me.
When travelling with my SPro4 I would sometimes 'hot spot' it to my tablet. Awkward, but it works. In an inversion that I still marvel at, software availability on the iPad has always been better than software availability on the SPro4.
The SPro4 has a better (to my taste) drawing experience than the iPad Pro does. And both the iPad Pro and SPro4 are way better than previous stylus attempts on capacitive screens.
The iPad has a much better 'tablet only' experience for me, and a much worse 'docked' experience. SPro4 with a Surface Dock and a couple of UHD monitors is literally indistinguishable from the NUC i7 box I use as a desktop. There is nothing (yet) that I can attach to my iPad Pro to make it work like a Macbook Pro (I know I know, different OS, processor etc.)
As a result I create more in general on my SPro4, consume more on my iPad Pro. I travel with my iPad in my hand and my SPro in my backpack.
Clearly I have been unable to leave one behind and just use the other so there is some more convergence to go :-)
What are your LTE costs per GB? It hasn't caught on in Australia; you pay about AUD $30 a month for about 5GB of total data traffic. If you have 5 devices you usually can't share it either, so you either have to swap a lot of SIMs or pay $150 a month for one plan per device.
In the US pretty much all carriers have moved to shared data plans where you can get a pool of data to share between all devices on the plan, which can include other family members.
I'm in Sweden, and here you can pay $63/month for a flat rate for calling/SMS/MMS/data plus 100GB in the EU/EEA, Malaysia, and Thailand, and you can get an extra SIM card if you need one for another device.
But my company pays for that anyway, so I can get a new phone every year with basically free data in the whole EU, which is really nice.
It's not just the cost, but also the hassle of getting SIM cards, submitting documentation to get each (in India), paying multiple bills, tracking which plan you're on for each SIM card, and so on.
I don't understand why people don't just tether to their phones. It just works, and you don't pay any extra for another device on your account. Biggest downside I can see would be draining your phone's battery a bit.
I did that once, and I wished that I could plug my phone in to my iPad to charge it. That didn't work because the iPad uses Apple's proprietary Lightning connector rather than the standard USB-C. If my iPad had USB-C, I could have plugged my Google Pixel into it with the USB-C to USB-C cable Google gave me.
This is another limitation of the iPad: it's like a computer, but with arbitrary limitations such as non-standard ports.
It still must be nice to be able to pull out your tablet and be connected to the internet anywhere you go, especially as Cat 6 LTE is being rolled out in my city (Toronto) with speeds comparable to my home internet in recent years.
This will definitely become a must-have feature in future machines for me. I would no longer have to worry about whether my local coffee shop's internet is running slow, or spend 5 minutes getting properly connected to some random public WiFi at a new place.
Public WiFi is prevalent but still very hit-or-miss. A consistently fast LTE connection would be quite the luxury.
It'd also totally kill SMS and telephone plans, which is what I'm looking forward to the most. End-to-end encryption for both voice and text will be a great incentive for this too.
LTE is only faster because your ISP is a douche. It is insane to ever think transmission over airwaves can rival present-day wired capacity. They're just assholes.
Use your mobile phone as a hotspot. In the US, T-Mobile includes 10GB of hotspot in the unlimited mobile plans, so it was a no-brainer for me to go that route.
The OP and GP provide an example of the age-old dichotomy between deontological (kyledrake) and consequentialist (cjbprime) viewpoints. Which is more correct has been debated for centuries and is unlikely to be resolved anytime soon, as it's highly subjective.
Should YouTube have refrained from using the copyrighted material that contributed to its popularity, given it greatly increased its chances of success compared to the other nascent video-sharing sites at the time?
Was reddit.com wrong for using sock-puppet accounts to kickstart content when it was starting out?
Some will say it's always wrong to use 'x' in a strategy, no matter what. Others would argue that while 'x' might be a bit evil, it's necessary to ensure survival and will result in a greater good (i.e. the service being useful to millions vs. extinction) in the long run.
GP suggested that IPFS might not want to openly circumvent a specific country's censorship system, and blog about it, and you're comparing not doing that to the use of fake accounts by reddit? The issue is not as complicated as you're making it.
Well, yeah. The consequentialist viewpoint is that if IPFS fails because it jumped too early, then ZERO people benefit from it. That's arguably the greater sin.
There's a saying, cutting off the nose to spite the face. That could apply here.
The greater sin is not failing, as failing is expected on the road to success. The greater sin would be to not jump when the opportunity presents itself.
Maybe look at it from a different point of view: if it makes a difference to someone in Turkey trying to access Wikipedia, then it's already a success.
It's a question of where you want to draw the line, and maybe the consequentialist drawing it at IPFS taking over the world and getting 3 billion daily users is far-fetched.
YouTube is probably not a sound example here, as posting a trove of copyrighted content was a deliberate strategy in a larger scheme, run by a group of people from the PayPal mafia with experience in this shady business, to get YouTube bought by a giant actor for a hefty sum of money.
From the get-go, the exploitation of illegal content (and other shady tricks) was done for the sole purpose of personal profit, with no intent of making the world a better place.
In other words, YouTube has been evil from the start, with evil intent, using evil tricks as part of an evil agenda. There was no room for ethics there.
Unless proven otherwise, IPFS does not seem to have such a nefarious purpose and even works toward a greater good. By its nature it will hardly ever be put up for sale, and it has to leverage different mechanisms, being a protocol rather than a website in need of registrations.
Wow, that's ascribing a lot of motivation to things which have multiple interpretations.
Napster was rarely categorised as 'evil' except by 'big content'. In the music industry there was compulsory licensing, which enabled its successor, Spotify. There was nothing similar in video.
This stuff can lead to race-to-the-bottom issues. Consequences are different on a micro or macro scale. The optimal answer is to figure out how to stop others from taking the dishonest approaches. But in a macro context where everyone else is cheating, it's a lot fuzzier and more complex whether it is right or wrong to cheat as well in order to compete…
Nice aesthetics and form factor, which makes the choice to use fabric right under the palms more than a little odd.
Perhaps it will have strong sales, given potential buyers rarely consider these long-term issues. However, that fabric is almost certainly going to turn nasty over time and be a pain to manage.
It's like innovating by installing living potted plants inside a Ferrari to improve air quality. Cleaner air is a desirable goal, although the implementation could use another iteration.
Microsoft has been using the same fabric for the keyboard covers on the Surface tablet for at least the last two generations. It seems weird on a laptop, but they should have most of the issues sorted out by now.
While that's certainly a concern, let's not forget that Alcantara is very common in high-end sports cars. It's commonly used in endurance racing (the 24 Hours of Le Mans, etc.).
I have a full Alcantara steering wheel in my Mercedes AMG. It's awful, and the AMG Private Lounge forums will tell you the same story: it's not uncommon for people to literally swap the Alcantara-clad wheels in their $100k vehicles for normal leather. It feels amazing when the car is brand new, but you need to keep cleaning it every month or it becomes a dirty clump of neither pleasant leather nor smooth fabric, just unpleasant to touch. After a certain point it doesn't matter how much you clean it; it looks like an old rag and the only thing you can do is have the wheel re-done. Having a laptop covered in Alcantara sounds like the worst idea ever, probably only a notch better than glossy piano-black plastic on surfaces that scratch if you so much as look at them.
This struck me as really odd too. This is one thing I think Apple (and others) have gotten really right. Metal doesn't wear down much from regular use or resting your palms on it like plastic or fabric does.
Human brains are energy inefficient? Well, that's a first ;)
"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."
"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
In terms of energy consumed for individual computations, yes. Neurons use chemical reactions to communicate and this is terribly inefficient. Transistors use very small amounts of electricity in comparison.
The main difference is that computer technology is designed to be very general-purpose, whereas the brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs also compute with 16 or more bits of precision, while real neurons are very low precision. There are other differences too: real brains are incredibly sparse, most synapses at any given time are dormant and not using much energy, and neurons are very sparsely connected to each other, while our current NNs are very dense and need to spend energy computing every single connection each cycle (see the rough sketch below).
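For a rough sense of that dense-vs-sparse gap, here's a toy Python sketch (my own, with an arbitrary layer size and a 1% connectivity figure picked purely for illustration, not anything measured from real brains or real networks):

    import numpy as np
    from scipy.sparse import random as sparse_random

    n = 4096                                   # "neurons" per layer (arbitrary)
    x = np.random.rand(n)

    # A dense NN layer computes every one of its n*n connections each pass.
    dense_w = np.random.rand(n, n)
    # A sparse layer with ~1% connectivity only touches its stored connections.
    sparse_w = sparse_random(n, n, density=0.01, format="csr")

    print(f"dense  multiply-accumulates per pass: {dense_w.size:,}")   # ~16.8 million
    print(f"sparse multiply-accumulates per pass: {sparse_w.nnz:,}")   # ~170 thousand

    _ = dense_w @ x    # work (and energy) scales with n*n
    _ = sparse_w @ x   # work scales with the number of stored connections

Same number of units, roughly a hundredfold difference in arithmetic per forward pass, which is the kind of gap being pointed at.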
It seems premature to make judgements about efficiency when there is so much we do not understand about brain function and consciousness. When you can replicate all brain function, let's compare efficiency. Comparing the brain to an ASIC reveals the source of your error rather than defending your position.
For that to be a fair comparison, wouldn't you need to look at all the energy consumed by the human brain over the many hours it took them to become a Go champion?
I think that's a fair argument, but going from the quote above:
> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
Let's say AlphaGo trained for a year; that would be 1 MW·yr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years of his life doing nothing but working on Go; that would be about 680 W·yr, still a factor of 1000 or so smaller.
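A quick sanity check of those numbers (the one-year training time is the assumption made above, not a published figure):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    alphago_power_w = 1_000_000   # ~1 MW while training, per the quoted article
    alphago_years = 1             # assumed training duration
    brain_power_w = 20            # typical human brain power draw
    brain_years = 34              # rough age at the time of the match

    alphago_j = alphago_power_w * alphago_years * SECONDS_PER_YEAR
    brain_j = brain_power_w * brain_years * SECONDS_PER_YEAR

    print(f"AlphaGo: {alphago_j:.1e} J, brain: {brain_j:.1e} J")
    print(f"ratio: {alphago_j / brain_j:.0f}x")   # ~1500x

So even under the deliberately generous assumption that the brain did nothing but Go, the machine comes out around three orders of magnitude hungrier.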
Totally, I'm sure it's correct, and even if you were to bring the comparison into line, the outcome is still "computer is watt hungry". The point is that the original statement, while correct, doesn't really say anything useful.
How would you know what amount of energy the human brain spent in learning to play Go, specifically? In the same time it was doing that, the human brain was also learning to do, and doing, a whole bunch of other things that AlphaGo was never even trained on: from moving limbs and controlling bodily functions to using language and recognising faces and so on. How would you isolate the amount of energy needed for training in Go, specifically?
I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?
That's not really the point, more that it was originally not an apples to apples comparison and therefore doesn't really tell us anything. I have no doubt that the statement is correct, it's whether or not that statement has any meaning. As another comment pointed out, even if you compared the hours, the computer still uses a few orders of magnitude more energy for a more accurate (although completely theoretical) comparison.
The gain is in being able to clone the bot perfectly. Once trained, you can make many of them. Also, if you look at what happened in chess, the lessons learned from the large machines were absorbed and resulted in your smartphone now being able to outclass the human world champion.
You can expect a similar thing with Go at some point.
Same goes for the bot, then. A back-of-the-envelope calculation suggests Lee's brain would consume as much energy in an 80-year lifetime as AlphaGo did in half a day.
Not trying to say it isn't a correct statement, or that the outcome would be different if you lined everything up properly, only that the original statement doesn't really say anything meaningful.
For one, a normal human can do long division as fast as a calculator, and can handle numbers that will bork many calculators. (Edit: look at human calculators, and the era before calculators were commonplace. Even now, elders I know can eyeball numbers and calculate percentages, factorials, and ratios.)
And for another, calculation != AI, far from it actually.
>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds.[1][4] Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.[10]
She could easily outperform calculators because she never needed time to key in the commands (she needed to hear the problem to solve it).
If we exclude that restriction, let the commands magically float into the calculator, and require that the problem is small enough to fit within the calculator's limits, then yes, with those arbitrary conditions met, the calculator can outperform her brain.
Which is precisely the type of "spherical cow" thinking that's being decried in the article.
People can and regularly do out-perform calculators in speed, energy and complexity of computation.
Do note that calculators weren't allowed as exam tools in a lot of countries until a decade or so ago. Students learnt mental math techniques that have been known since ancient times (think Greece).
For a human brain the answer isn't even calculation; it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as recognizing a letter.
The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.
----
AI is not in the realm of what a calculator can pull off, which is what I meant by the compute part.
Edit: I tried your computation on a store calculator; it's beyond its ability to calculate (0.0000000027).
Your example is from 1977; we've had 40 years of Moore's law since then. In the time it takes for you to recognise that you're even looking at a number (~0.08 seconds), the cheapest computer you can buy (the $5 Raspberry Pi Zero) can do around 1.92 billion ordinary floating-point maths operations. Sure, 201-digit numbers are a little slower: on my laptop, in Python, I can only do that particular calculation just under one hundred million times in the fifty seconds it took her to do it once.
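If anyone wants to poke at that kind of timing, a rough version of the same calculation with Python's built-in big integers might look like this (my own throwaway helper, not necessarily the approach or the speed quoted above; absolute numbers will vary by machine and method):

    import time

    ROOT = 546_372_891
    N = ROOT ** 23                     # reconstructs the 201-digit number
    assert len(str(N)) == 201

    def int_23rd_root(x):
        """Integer 23rd root: float estimate, then correct for rounding."""
        r = round(x ** (1.0 / 23))     # ~1e200 still fits in a float
        while r ** 23 > x:
            r -= 1
        while (r + 1) ** 23 <= x:
            r += 1
        return r

    reps = 100_000
    start = time.perf_counter()
    for _ in range(reps):
        assert int_23rd_root(N) == ROOT
    print(f"{reps / (time.perf_counter() - start):,.0f} roots per second")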
But you're right to say calculators are single-purpose devices, and yes, my comparison magically inserts the question into them.
So I downloaded sklearn, which contains a set of labelled hand-written digits.
It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.
For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.
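For the curious, a minimal version of this kind of experiment could look something like the sketch below (using scikit-learn's bundled 8x8 digits set and a plain SVM as one possible setup; exact timings will differ by machine and model):

    import time
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()             # ~1,800 labelled 8x8 hand-written digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.5, random_state=0)

    start = time.perf_counter()
    clf = SVC(gamma=0.001).fit(X_train, y_train)   # "learning to read numbers"
    print(f"training took {time.perf_counter() - start:.3f} s")

    start = time.perf_counter()
    accuracy = clf.score(X_test, y_test)           # reading the held-out digits
    elapsed = time.perf_counter() - start
    print(f"read {len(X_test)} digits in {elapsed:.3f} s "
          f"({len(X_test) / elapsed:,.0f}/s, accuracy {accuracy:.1%})")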
Dollar-store calculator? I'd never have said that myself, because I know calculators are rubbish. But in the context of "AI: will it ever beat humans?" hell yes, it will, and in many cases it already does.
There is the practice of anzan in China and Japan, which shifts the burden of calculation into the visual cortex by visualizing a mental abacus. Advanced practitioners can thus rapidly and correctly evaluate calculations like the one you have given. As you can see, Tsujikubo's speed in division appears to be limited by her ability to physically write out the answer.
Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.
And he can only play Go... what a waste for such a big AI. AI can beat humans at certain narrow tasks, but they are designed by us, so they are wrong and bad, especially when all you need to do is unplug the battery and they die. Too easy. Come on, AI, do something more than that...
The cell carrier oligopoly companies are cheap, and would rather use their scarce and immensely valuable resources (bandwidth) to provide new services and open new revenue streams, not boost the quality of existing services which pretty much everyone already pays for.
And, as with many other things, the revealed preference of many people is that they don't care much about the quality of the call relative to other things.
I was just having a conversation with someone earlier today about how the average quality of most of the voice calls we make is far lower than it was decades ago on Ma Bell landlines. Of course, calls are much cheaper now, we have mobile phones, etc. Nonetheless...
Sprint's early ads about hearing a pin drop on their fiber-optic network seem really quaint today.
Which, even stipulating that is the case, is not what most people use. Instead many phone conversations involve "Wait, are you still there" "Can you hear me?" [Shouted] "What?" No, we've most assuredly given up universal quality for convenience and lower prices.
We CAN provide high quality but we mostly don't want to make the tradeoffs to do so.
Opus has a lot of variables, but if you could consistently get 53k connections with a modem, you were probably getting quality slightly under G.711 (because of robbed-bit signaling and any analog line noise). PCM encoding has a lot less latency than Opus, and circuit switching means essentially zero jitter. If you don't mind adding some latency, and can control jitter, Opus can handle wider-bandwidth audio and is much more bitrate-efficient.
Sure they do. People are ditching landlines to save money even though making do with just a cell phone often gives significantly worse quality. (Which was actually the context of the discussion I mentioned. I was debating giving up my landline--well, Xfinity Voice--even though just using my cell service would result in poorer average call quality.)
That isn't true at all. Most US networks, and many others globally, support VoLTE. The difference in sound quality is remarkable, to the point of being unnerving at first.
In order to make an end-to-end VoLTE call, everything needs to support it and have it enabled: the handsets, the network(s), call handling equipment, interconnections between networks, etc.
You can tell if your handset and network are enabled by making a call while on LTE; if your network indicator drops off LTE when you make the call, it's not both supported and enabled. Most carriers have a help page with a list of supported devices and how to enable it. I don't know if there's any good indicator that you have an end-to-end enabled call, unless you can tell from the sound quality.
Isn't it still possible to make a low-audio-quality call over the LTE _network_ (i.e. showing LTE when you make a call) due to negotiating down to older codecs if one of the other elements of the connection doesn't support higher-quality codecs?
I don't have them both together yet, but I will probably update this comment later today. I now notice that I might have been premature, though, as where I am doesn't currently have LTE coverage.
I know you're joking, but if you haven't had the experience of using Skype on a smartphone with someone else, it's worth the hassle. It's incredible how crystal-clear the audio is. It sounds like you're next to the person.
Skype has the best voice quality, followed by land lines, and cell phones are dead last. Cell phone voice quality hasn't improved since my first cell phone in the 90s.
I frequently lately have had to have people call me back on my cellphone because the Skype call quality has been awful. Skype seems to be much more dependent on network conditions.
The fact that you offer a real (i.e. full-featured) free plan is really nice. Congrats on your success and finding a business model that works. :)