RSS feeds are actually used quite a lot in the podcasting industry.
And indeed, nothing guarantees that you won't get ads in those podcasts: ad insertion is often done server-side, so the media URL in the RSS feed gets served with ads already baked in.
Sweet childhood :D!
So much nostalgia was tied to this game that, at the beginning of the pandemic, I thought I'd have a go at creating a virtual pet game for Garmin touchscreen smartwatches.
It's nowhere close to the actual Tamagotchi games, but it was a nice project.
I was really tempted to buy a new Mac but there are still lots of gotchas my colleagues had to solve, so I decided to wait. Pushing ARM64 to developer machines still seems like a rushed decision imo, and it definitely shifted resources into building and supporting ARM64, resources that Apple doesn't necessarily pay for.
At the end of the day, for me it matters how fast I can do a certain task and how much spare time I have left after I finish it.
I've been on M1 since last year (switching from a 2015 MBP), and while there have been some issues, the transition has gone fairly smoothly. The main one I had to deal with was a Google library embedded inside React Native Firebase, but the fix was fairly easy (setting Xcode and Terminal.app to open using Rosetta). My Mac mini is enough for daily work, and my MBA is perfect for personal use, never getting hot even with multiple applications open.
I still wonder why we don't see widespread, standardized battery packs in standardized housings that could be swapped quickly at existing gas stations with minimal infrastructure. Ideally, a swap would take about as long as filling a tank of gas and would be done by an automatic system. I've seen some videos of NIO doing this, but I'm pretty sure there's no standardized solution for the European market.
I think this model could work if car owners didn't own the replaceable battery packs but instead paid only for the energy inside them. I'd guess it's easier to install one automatic ramp at a gas station that services a client in under 5 minutes than to build charging stations that service clients over hours.
Edit: I'm not referring to the whole battery capacity; these packs could offer ~100 km of range, depending on the vehicle.
Most people don't need to fast charge most of the time, so battery swapping never really made sense compared with gradual improvements in battery size/cost and fast charging, and the greater availability of slow charging at the places where cars park anyway.
Gogoro scooters used this model, and it might still have niche uses, but it's generally not as good as the current approach.
NamedTuple works great for a lot of cases, but not all of them -- for example, when an attribute's default is a mutable collection.
Dataclasses have `default_factory` for exactly these occasions.
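A minimal sketch of the difference (the class names here are made up for illustration):

```python
from dataclasses import dataclass, field
from typing import NamedTuple

# A mutable default on a NamedTuple is evaluated once, at class
# definition time, so every instance shares the same list object.
class PlaylistNT(NamedTuple):
    name: str
    tracks: list = []

a = PlaylistNT("a")
b = PlaylistNT("b")
a.tracks.append("song")
print(b.tracks)  # ['song'] -- b sees a's mutation

# A dataclass refuses a bare mutable default (it raises ValueError)
# and offers default_factory, which builds a fresh list per instance.
@dataclass
class PlaylistDC:
    name: str
    tracks: list = field(default_factory=list)

c = PlaylistDC("c")
d = PlaylistDC("d")
c.tracks.append("song")
print(d.tracks)  # []
```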
> I write code in order to express myself, and I consider what I code an artifact, rather than just something useful to get things done. I would say that what I write is useful just as a side effect, but my first goal is to make something that is, in some way, beautiful. In essence, I would rather be remembered as a bad artist than a good programmer.
This entered my list of favorite quotes! For this, if not for your huge contribution to OSS, grazie!
Well, given that they didn't even use pythonic constructs, I'm not quite sure what to think of the article:
In [1]: import random
In [2]: r = [random.randrange(100) for _ in range(100000)]
In [3]: x, y = random.sample(r, 1000), random.sample(r, 1000)
In [4]: %timeit z = [x[i] + y[i] for i in range(1000)]
106 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: %timeit z = [i + j for i, j in zip(x, y)]
67.3 µs ± 3.38 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(under python 3.6.3)
For those who "don't see it": instead of looping over a zip of the iterables, as they should be doing, they're using an index range to access each element by position, which is not best practice and also causes a noticeable slowdown.
Edit: and for the curious who might think the two produce different results:
In [6]: z1 = [x[i] + y[i] for i in range(1000)]
In [7]: z2 = [i + j for i, j in zip(x, y)]
In [8]: z1 == z2
Out[8]: True
Final edit: all in all, I'd say this is a low-effort post aimed at gathering attention and showing "look how good we are, we know how to speed up Python loops using numpy" (/s)...
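For reference, the numpy version the article is presumably advocating would look something like the sketch below. It does beat both pure-Python loops on large inputs, but the list-to-array conversion has its own cost, so for small lists the plain zip comprehension can still win:

```python
import random
import numpy as np

x = [random.randrange(100) for _ in range(1000)]
y = [random.randrange(100) for _ in range(1000)]

# Vectorized elementwise addition: one C-level loop
# instead of a Python-level one.
z_np = (np.array(x) + np.array(y)).tolist()

# Same result as the pythonic zip version.
z_py = [i + j for i, j in zip(x, y)]
assert z_np == z_py
```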
Short answer: because Python 3 already benefits from a better, more complete client, aioh2 (https://github.com/decentfox/aioh2), which works with Python's asyncio.
th2c is mostly intended for Tornado backends, trying to keep a similar interface and compatibility with Request / Response objects.
It was initially developed in order to be able to communicate with APNS from a python 2.7 environment.
Hey, I posted an issue on the repo about a Python 3 port here: https://github.com/vladmunteanu/th2c/issues/27. I think you should give the same explanation there and close it outright. There's still a lot of Python 2.x code being written out there, with companies looking for quick ways to upgrade their stacks; while I wish it weren't so, your point makes a lot of sense.
It's still a reasonable starting place for anyone wanting asyncio on vanilla Python 2. Outside of vanilla Python 2, Tauthon also has some tracking issues for the language extensions and libraries needed for a backport of asyncio to Tauthon.[0]