Ask Apple, who hide it from you in Xcode, riddle it with human-unreadable generated IDs and noise elements, and needlessly reshuffle elements after editing, which overcomplicates your diffs and obscures the meaning of changes. Interface Builder in all its forms is indefensibly bad in execution.
Idempotency is a good example of a trivia term that it would be unreasonable to expect most recent CS graduates to know. Not to say that it isn't a valuable property for a service or library or whatever to have under certain circumstances -- but it's not a reasonable way to decide whether a candidate for a junior position is a good fit.
Huh, I did a diploma in computing (UK) in my spare time about 12-16 years ago and remember covering it in a databases module (and possibly when doing OOP with Smalltalk?) - I'd go with "it's about repeatability, knowing that sending the same request, e.g. an SQL query, will return the same results".
That's a little off the mark, as I've checked, but I'd think a full-time CS (as opposed to programming or computing or IT) student would be able to define it very readily ...?
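To make the distinction concrete (a throwaway Ruby sketch, names invented): an idempotent operation can be retried safely because applying it twice leaves the system in the same state as applying it once, which is not quite the same thing as "the same query always returns the same results".

    account = { balance: 100 }

    # Idempotent: setting the balance to an absolute value.
    set_balance = ->(acct, amount) { acct[:balance] = amount }

    # Not idempotent: each repeat keeps changing the state.
    add_deposit = ->(acct, amount) { acct[:balance] += amount }

    set_balance.call(account, 150)
    set_balance.call(account, 150)  # still 150 -- safe to retry
    add_deposit.call(account, 50)
    add_deposit.call(account, 50)   # now 250 -- retrying changed the outcome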
They're promoting this as a new dev environment for .NET Core, but there's still ZERO tooling for Razor. I tried starting a simple example project and the .cshtml files didn't even have any syntax highlighting, let alone syntax/type checking.
I don't know how you work on cross-platform ASP.NET for this long and still don't have the tooling for your templating engine ported.
It seems like the "ASCII puke" concrete syntax for regex patterns doesn't scale that well. Regular expressions have binary operators, parenthesization, named groups, lookaheads, etc.-- if you're building a sophisticated regex of more than 10 characters or so, why not have some kind of an object model for this stuff so you can have reasonable forms of composition, naming of intermediate values in construction of a larger pattern, and the ability to attach modifiers to things without needing to pack more @#$%&*!^ un-Googleable junk into string literals?
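In Ruby you can at least fake part of that object model, since Regexp objects interpolate into regex literals; a small sketch (the sub-pattern names are mine):

    octet = /\d{1,3}/                     # named intermediate pieces...
    ip    = /#{octet}(?:\.#{octet}){3}/   # ...composed into larger patterns
    port  = /\d{1,5}/
    addr  = /\A#{ip}:#{port}\z/

    p "192.168.0.1:8080".match?(addr)  # => true
    p "not an address".match?(addr)    # => false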
Icon (and SNOBOL before that) had an alternate syntax that was more verbose but more readable.
s := "this is a string"
s ? { # Establish string scanning environment
while not pos(0) do { # Test for end of string
tab(many(' ')) # Skip past any blanks
word := tab(upto(' ') | 0) # the next word is up to the next blank -or- the end of the line
write(word) # write the word
}
}
I should implement something like this in Ruby some day... (I did it in Java long ago, but that was before you just threw things up on GitHub and I have long since lost it.)
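For what it's worth, a rough Ruby sketch of the same word-scanning example using StringScanner from the standard library (not a real port of Icon's scanning environment, just the shape of it):

    require "strscan"

    s = StringScanner.new("this is a string")
    until s.eos?                  # test for end of string
      s.skip(/ +/)                # skip past any blanks
      word = s.scan(/[^ ]+/)      # the next word is up to the next blank or the end
      puts word if word           # write the word
    end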
For those who haven't seen the language before, I think it's also useful to know that expression evaluation in Icon works using a recursive backtracking algorithm. This means that the most natural way of writing a string scanning parser (like the one above) more or less automatically gives you a recursive backtracking parser. Like ebiester, I too have found it to be a nice way to do certain kinds of simple string parsing.
The regular expressions in Ruby, Perl and Python have supported an "extended syntax" for a long time (the x flag). For an example, see the first section of http://www.perl.com/pub/2004/01/16/regexps.html (this is Perl; Ruby is very similar).
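For anyone who hasn't used it, this is roughly what the x flag buys you in Ruby (a made-up date pattern, just for illustration): whitespace in the pattern is ignored and # starts a comment.

    date = %r{
      \A
      (?<year>  \d{4} ) -   # four-digit year
      (?<month> \d{2} ) -   # two-digit month
      (?<day>   \d{2} )     # two-digit day
      \z
    }x

    m = "2017-03-14".match(date)
    puts m[:year]   # => 2017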
I think the article would be easier to read if all the regexes were in extended form, but I suppose the author is an expert regex user, so the examples were easy enough for him.
And finally, Perl 6 totally re-did text matching with "grammars" (https://docs.perl6.org/language/grammars.html) -- they use much more readable syntax, nameable groups, etc... It really is quite a wonderful thing; I wish it were available in other languages.
Not so much an expert, but I did Perl for 8 years before 13 years (so far) of Ruby, so a lot of exposure(!) I've not been a fan of extended syntax, but it might be worth me giving it a proper go and writing something up if it's helpful, so thanks!
Here's ebiester's Icon example in the parse dialect:
s: "This is a string"
parse s [
any " " ;; skip past any leading blanks (none in this example!
any [ ;; repeat ANY while keeps matching (true)
copy word to [" " | end] ;; next word up to next blank or EOL
(print word) ;; print word
skip ;; skip past blank
]
]
I have replaced most regular expressions with irregexes (http://synthcode.com/scheme/irregex/). They are sadly Scheme-specific, but they do a lot for clarity.
For anything more complex than small things that can be understood in less than 20 seconds, I use a parser generator, be it parsack (a Racket version of Haskell's Parsec) or whichever I have at hand.
Personally, I'm fine with it—but only because Regexps really belong to a different domain than people think.
Regexps do not exist to be a self-documenting syntax for writing code that gets read and maintained. If you are going to sit down, write, debug, commit, and PR some code that matches strings, for heaven's sake just write your pattern in BNF and apply a parser generator to it, or use a parser combinator library.
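A toy illustration of the combinator idea in Ruby (these helpers are invented for the example and aren't from any particular library): each parser is a lambda from an input string to [value, rest], or nil on failure, and bigger parsers are built by composing and naming smaller ones.

    literal = ->(tok) { ->(input) { input.start_with?(tok) ? [tok, input[tok.length..-1]] : nil } }
    either  = ->(a, b) { ->(input) { a.call(input) || b.call(input) } }
    both    = ->(a, b) {
      ->(input) {
        first = a.call(input) or return nil
        second = b.call(first[1]) or return nil
        [[first[0], second[0]], second[1]]
      }
    }

    # "true" or "false", followed by "!"
    bool    = either.call(literal.call("true"), literal.call("false"))
    exclaim = both.call(bool, literal.call("!"))

    p exclaim.call("true!")   # => [["true", "!"], ""]
    p exclaim.call("maybe!")  # => nil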
Regexps are intended as a fluent syntax for interacting with data. Regexps exist to be arguments to sed, awk, and vim's :s command. Regexps exist to let you type an SQL query into psql that finds rows with columns matching a pattern. They're meant to be a hand-tool, used by a craftsman during the manual work of analysis that comes before the job is planned.
And as such, regexp syntax features aren't meant to be composed into multi-line monstrosities that do all the work at once; they're meant to let you match chunks, and then pipe that to another regexp that winnows those chunks down, and then another that substitutes one part of each chunk, etc.
If you've ever seen a Perl script written in "imperative mode", where every line is relying on the implicit assignment of its result to the $_ variable, each line doing one more thing to that variable, each little regexp sawing off one edge or patching one hole -- that's an example of the proper use of regexps. Such a script is effectively less a "program", and more simply a record of someone's keystrokes at a REPL.
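Something like this, in Ruby for concreteness (made-up input; the point is the style, not the pattern) -- one scratch string, one small regexp per step:

    line = "  GET /users/42?debug=1 HTTP/1.1  "
    line = line.strip                        # trim the edges
    line = line.sub(/\s+HTTP\/[\d.]+\z/, "") # saw off the protocol version
    line = line.sub(/\?.*\z/, "")            # patch out the query string
    line = line.sub(/\AGET\s+/, "")          # drop the verb
    puts line                                # => /users/42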
And because of that, I honestly find it a bit strange that modern compiled languages build in "first-class" native support for regexps. They make sense in "scripting" languages like Ruby and Python because those languages can indeed be used for "scripting": writing code in their REPLs to do some manual tinkering, and then maybe saving a record of what you just did in case you need it again. But in languages like Go or Elixir? Why not just give the developer a batteries-included parser-combinator library instead? (If you, as a developer, need to parse regexps to support your users querying your system by passing it regexps, they could still be available from a library. But there's no need for a literal syntax for them in such languages.)
That being said, I wouldn't mind if an IDE for a particular compiled language accepted regular-expression syntax as a sort of Input Method Editor: you'd hit Ctrl+Shift+R or somesuch, a little "Input regexp: " window would pop up over your cursor, and then as you wrote and modified the regexp in the window, the equivalent BNF grammar would appear inside a text-selection at the cursor. That's a good use of regexps: to allow you to fluently, quickly create BNF grammars. As if they were a synthesizer keyboard, with each keystroke immediately performing a function.
I don't see why there can't be React-like libraries written and used in languages that compile to native. I'm not expecting to have JSX but I should be able to write component classes and implement their render methods, returning view trees written with some kind of object/array literal syntax.
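Just to sketch the shape of it (Ruby here purely because it's handy for literals; the same idea would translate to Swift or C++): a component class whose render method returns a plain nested-literal view tree that a hypothetical host library could diff and apply to real views.

    class Counter
      def initialize(count)
        @count = count
      end

      # Returns plain data, not real views -- the host library would diff this
      # against the previous render and patch the native view hierarchy.
      def render
        [:vstack, {},
          [:label,  { text: "Count: #{@count}" }],
          [:button, { title: "+1", on_tap: :increment }]]
      end
    end

    p Counter.new(3).render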
To get as good a development experience as React, it would require some work by the compiler and runtime people to basically let you do something like hot loading -- Android has something like this now, and maybe Apple will get it too, though I'm not holding my breath.
I think it's a no-brainer for web development these days to do React because 1. you can opt out of it for parts of the page where it's not going to work, for whatever reason, and 2. the performance is pretty damn good compared to lots of alternatives, including writing all the UI state management logic yourself. However, I've not been convinced that the buy-in is worth it for native mobile development. Can someone who knows more tell me: is it fairly easy to do something like say "I can't/don't want to use React Native for this view controller -- I'm going to implement it in code and use it and everything will just work"?
Companies with large, existing apps that want to use React Native definitely share the desires written out in your last paragraph. Facebook has this need in their main app. And at React Conf today, Leland Richardson from Airbnb just gave a talk on using React Native in "brownfield" apps and seamlessly sharing a single navigation controller across React Native root views and UIViewControllers and Android Fragments. The navigation library that helps a lot with this is here: https://github.com/airbnb/native-navigation
GP means Visual Basic, probably as in the VB macros that you can write within Word/Excel/Access etc., in which there is surely a huge amount of code written by otherwise non-programmers.
It's absurd to have to beg the wealthiest software company in the world for what should be considered really basic stuff. Xcode is consistently unstable, slow, missing simple essential functionality (like refactoring), and Apple's interface builder is something that most experienced Apple devs know to run for the hills from.
I remember a few years ago, at a WWDC session on Xcode, the presenter was talking about version control improvements.
He said something to the effect of "Xcode has a robust version control system" and the crowd laughed. And the presenter got offended and said it "wasn't nice" of the audience to laugh considering how hard-working the Xcode team was.
My recollection is blurry so it would be nice if someone else remembers this too.
But if that's really the attitude within Apple or the Xcode team, don't hold your breath for improvements.
Edit: changed my paraphrase of the presenter, since I remember it a little better now.
Well gee they are working so hard so we shouldn't criticize obviously. Golly all that hard work, so nice that it exempts them from derision for their substandard product.
The videos from many WWDC sessions are altered. So even if you got the right session, it may not have this. I've been in sessions where much less interesting things have happened that haven't made it into the video. Also, last WWDC they started pre-recording sessions so they could be up on the web sooner. Many of the video sessions from last year don't have any audience noise and differ significantly from the live session.
Apple is not a big company. Out of the 116k employees they have, 60k are in retail [1]. Another 6k are in AppleCare call centers [2]. So about 50k of them are at corporate. Contrast that with Google, which has 72k employees [3], and Microsoft, which has 120k employees [4].
Apple's organizational culture is meant to be about small teams. Steve Jobs once said "we're the biggest startup on the planet" [5]. This leads to complaints about various neglected features or products and calls for Apple to hire more employees or spin off divisions so it can have dedicated resources. Just like people will always complain about the quality of service at airlines, Apple watchers will always complain about whatever pet issue they feel isn't getting enough attention. That there are complaints doesn't in and of itself mean that Apple needs to change their organizational culture. These complaints always miss the opportunity cost of the changes they suggest: that Apple is Apple because they are resource constrained.
Let's break that down across everything those 16k software engineers are responsible for:
• The OS kernels, drivers, and frameworks of macOS, iOS, watchOS, and tvOS
• The base-system software on all of those OSes, including rather involved apps like: iBooks, Safari, Mail.app, iTunes, Photos.app
• "Apps by Apple" like iWork, GarageBand, Pages/Keynote/Numbers, Final Cut Pro, Logic Pro X, iBooks Author, and, yes, Xcode
• Server.app (which adds to macOS the kind of enterprise domain-management + provisioning + MDM tooling that Windows gets in its Server releases, but also includes extra stuff like Wiki software, Xcode build-bots, and VPN management)
• Firmware + macOS drivers + Windows drivers(!) for Apple hardware (keyboards, mice, touchpads, headphones; I bought one of those MacBook USB-C multiport dongles recently and it did a firmware update, so apparently it has firmware too)
• Firmware and operating systems (usually NetBSD-derived) for "appliances" like the Airport/Time Capsule [though at least this has been dropped]
• Sponsored work on open-source projects (Webkit and LLVM being the two big ones) and standards (the Swift language; the Bonjour protocol)
• iCloud backend services: this includes the "obvious" things like the object store behind iCloud Drive and the per-app iCloud CoreData syncing servers; but also includes:
• • Apple's own maps service to back Maps.app
• • the iTunes store and App store (both in web and app form)
• • the Apple Music / "iTunes in the Cloud" sync servers
• • iCloud PIM support (mail, notes, calendars, reminders)
• • the FaceTime and Messages.app servers
• • Siri and Dictation (and you likely won't believe just how many languages Apple has built well-trained speech models for)
• • the Apple website / Apple Store + Apple Support apps
• • Xcode "development team provisioning" servers
• • webapp versions of iWork and the PIM apps (go look at icloud.com)
This looks like a reasonable list; however, I think the overarching view is that with all of Apple's resources (money, etc.) they should be able to point some of them at Xcode. This may involve hiring developers or shifting priorities from other projects. Either way, it's something that most developers feel is necessary and good.
Then again, maybe this is another way of saying that Apple really doesn't care about professional programmers and their needs. Similar to the feedback around the latest MacBook Pro specifications and the lack of movement in the Mac Pro and Mac Mini machines.
I'm not sure what flaw you found in what he wrote, or why even ask.
Obviously it's not the absolute number (50k) that counts, but how it's distributed. (And those 50k are not even all programmers).
If you do an OS, a mobile version of it, an embedded version of it, your own language, several huge SDKs, your own mail app, your own calendar app, your own spreadsheet, your own word processor, a TV appliance, the biggest mobile app store on the planet, your own logic board and CPU designs, the biggest music store on the planet, another large desktop app store, your own DAW, your own NLE, your own compositor, your own browser, your own JavaScript engine, your own AI, your own Maps, and several other things besides, then no, "50k" might not be enough.
The flaw is that when you have 50k employees, you have much greater flexibility and resources with regard to human resources (and likely almost every other resource) to move around than a company with 200 employees.
Additionally, we are invited to "Contrast that with Google which has 72k employees". So, Google, which has almost exactly 20% more employees (after removing all retail and help center Apple employees), is supposed to be so much different, and an example of a large company? At least the comparison to Microsoft has more than a doubling of employees.
The argument that Apple works the way they do because they run their teams lean is fine, but let's not start acting like they're not a large company just because their culture is intentionally different in some aspects.
Edit: To be clear, my objection is to the premise "Apple is not a big company.", which was the leading statement of the comment I replied to.
A company with 50,000 employees in its corporate office(s) is huge. Saying another huge company is a bit larger and therefore this company is no longer huge is nonsense. Also, that a common range for defining "mid-sized" companies is under 1,000 employees should hint that Apple is beyond large. Here's a nice summary on how SBA and some companies classify other organizations:
Businesses with a thousand or more employees are so rare as a percentage of the whole that they deserve their own category anyway. As in, what you say about them wouldn't apply to businesses in general and vice versa.
They have the most cash reserves because they're (a) incredibly profitable due to product demand and monopolistic practices (i.e. patent suits); (b) stockpiling cash. It's that simple. There's plenty of things they could be improving at Apple or just investing in. They're hoarding instead. They're not alone: many of the greediest, richest, and shareholder-oriented companies are doing the same thing while stagnating.
I'm not sure the best way to measure and compare, but it sure seems like Apple focuses a lot more on a narrow set of products than either Microsoft or Google.
Apple might be vertically integrated to a higher degree than Microsoft and Google, but Apple doesn't have as much product diversity.
I was all-in with six. Three people to write the code. Two people to write the documentation and otherwise interface with the public. One person to answer all of the emails and attend all of the meetings and make sure nobody interrupts the other five people.
If you have headcount for "testing" then I don't want to use your software. Tests are integral to coding and should be written first.
Grandparent's not talking about unit tests. A good human tester can be very useful in finding the mysterious edge cases where the bugs roam, without wasting your developer's time doing the same.
> If you have headcount for "testing" then I don't want to use your software. Tests are integral to coding and should be written first.
Yeah, if everything were synchronous, deterministic, linear, and non-interactive, life would be so much easier, and test-driven development might actually work.
Do you use Apple software? I understand they are pretty big on manual testing.
Your process description doesn't sound anything like how I understand products are developed at Apple. Where's the headcount for the designers / UI specialists? Are you counting them as engineers?
It seems like what was once their strength is now becoming a liability. They seem to be spreading themselves too thin and more and more people are complaining about more and more things. Since more customers are unhappy it seems like they either need to hire more people or cut products.
I thought they were already cutting products. Apple displays: gone. Airport: gone. Mac Mini and Mac Pro haven't had updates in so long they should be gone.
I've noticed this effect recently - I call it "too big to try."
Once a given institution reaches a certain scale, the apparent limitations on human attention at the top of the hierarchy make it impossible for the organization to contemplate small ventures. Like the parable of Bill Gates finding a hundred-dollar bill on the sidewalk, it's no longer worth the time to stoop to pick up the small stuff.
(Intuitively, this seems related to the absurd inflation in the cost of public works in the US over the last century.)
Definitely. To me, it's a symptom of control-oriented organizations, where too much information processing has to take place at the top. As a contrast, support-oriented organizations work to keep most decisions happening lower down.
It's especially frustrating here because the business case here seems pretty simple: go make these developers happy and effective. It's a known audience, they're easy to reach, they're not shy about telling you what they want. I don't think a lot of information needs to get to the top of the hierarchy.
Indeed, Apple has the high-tech equivalent of "dragon sickness", in a nod to Tolkien.
Google has a long road ahead of it, but it looks like they have the right pieces in place, and I see small, incremental improvements each year, so maybe they're the tortoise. Milestones on that long road include migrating away from Dalvik, fixing business model / monetization issues with the Android marketplace, switching to vector-based canvas blitting, rationalizing API support for different manufacturer-added features, and so on. What would be interesting is if they steal Apple's developer thunder by capitalizing upon open source and their in-house build system, and out-flank Apple for developer mindshare by creating a developer-oriented ecosystem.
Imagine if you could hook up your own Docker container that Google's build infrastructure then taps to build your Android app... but all the open source your app depends upon is in their build infrastructure, with near-instant feedback on build and CI problems of the open source bits, operating at a massive scale. App development shifts to a posture where open source frameworks/modules/libraries that already power a lot of software are orders of magnitude more convenient to develop with under this ecosystem, and the agility/efficiency of all those Android developers coalescing around common open source components in a single build and CI ecosystem far outstrips that of Apple-based developers stuck with Xcode and their own person-oriented toolchains. Apple has nothing in the pipeline remotely like that kind of ecosystem. Google would also get big-data-based insight into phone app development trends in real time that Apple could only dream about. Google's phone app development OODA loop would become considerably tighter than Apple's.
I also wonder if Google and Microsoft could find benefits to team up to replace Dalvik with CLR, and then Microsoft Visual Studio becomes a first-class citizen on Linux for building CLR-based apps on Android.
Um, Google has actually worked very hard to de-couple parts of Android into separate apps, so that those apps can update without a carrier-managed total OS upgrade.
This has arguably negative impacts on Android's utility as a non-Google OS, but there's no question that this strategy was to help push updates to users faster.
> Imagine if you could hook up your own Docker container that Google's build infrastructure then taps to build your Android app...but all the open source your app depends upon are in their build infrastructure, with near-instant feedback on build and CI problems of the open source bits operating at a massive scale.
But... why? Gradle already does a good job eliminating works-on-my-machine-isms, that sounds like cloud-for-the-sake-of-cloud.
> I also wonder if Google and Microsoft could find benefits to team up to replace Dalvik with CLR, and then Microsoft Visual Studio becomes a first-class citizen on Linux for building CLR-based apps on Android.
> Gradle already does a good job eliminating works-on-my-machine-isms, that sounds like cloud-for-the-sake-of-cloud.
GitHub for build and continuous delivery/deployment/integration; extremely distributed builds. Today an app developer pulls down the latest version of an open source library and builds against it, then files any integration issues against the library's ticketing system. Google's system allows the library developer to build a version, then find everyone whose apps using their library break because of the new proposed version. It vastly speeds up the delivery cycle and increases robustness between builds of all your dependencies and your app.
A lot of people really like the re-factoring and IntelliSense features in Visual Studio, but hate working under Windows. The new Linux compatibility push under Windows when it matures may accomplish the same as making Linux a first-class citizen in Visual Studio.
Google trying to further develop Dalvik/Android Runtime for Android seems to me eerily like Sun developing further generations of SPARC. Google would have to put up a really big war chest to continually find and address all the edge cases to sustain a process virtual machine going forward into the future, and I'm not clear where the value proposition lies in doing so, rather than settling upon an existing process virtual machine with more developers working upon it. I suspect Google does this because adopting someone else's process virtual machine, even an open sourced one, risks someone else somehow strategically chokepointing Android development in the future with incompatible changes.
> Google's system allows the library developer to build a version, then find everyone whose apps using their library break because of the new proposed version.
That assumes that library authors can see the code of dependent applications. Ehhh...
> It vastly speeds up the delivery cycle and increases robustness between builds of all your dependencies and your app.
Presumably by breaking repeatable builds? I certainly don't want it to replace libraries without my knowledge, and that's the only part where I can see it possibly "speeding up the delivery cycle".
Oh, yeah, Gradle can do that anyway with -SNAPSHOT dependencies.
> A lot of people really like the re-factoring and IntelliSense features in Visual Studio, but hate working under Windows. The new Linux compatibility push under Windows when it matures may accomplish the same as making Linux a first-class citizen in Visual Studio.
Tried IntelliJ/Android Studio? Especially considering that ReSharper, which is more or less a port of IJ's refactoring, is usually considered a must-have add-on for VS.
> Google trying to further develop Dalvik/Android Runtime for Android seems to me eerily like Sun developing further generations of SPARC. Google would have to put up a really big war chest to continually find and address all the edge cases to sustain a process virtual machine going forward into the future, and I'm not clear where the value proposition lies in doing so, rather than settling upon an existing process virtual machine with more developers working upon it.
Sounds like migrating to OpenJDK would be a much more reasonable path in that case, since it wouldn't mean starting over in third-party application support.
>That very wealth is what's insulating them from the long-term reality of the choices they're making. It's a trap!
What "long term reality"? They have been doing this when they were near bankrupt and continue to this day, 20 years later, and they are now the richest company on the planet.
The one that's been getting me lately is the split between Xcode 7/Swift 2.2 and 8/3. We have an older project in Swift 2 (yes, it's slated to get changed over, just not yet), and new work on a project in 3. If I try to open one while the other is already open, one of the Xcodes invariably freezes or crashes.
I will say, though, that I've always liked IB (though not storyboards). But then there's the wonderful "Oh, you opened a nib, I should move something around in the XML." -> SCM status changes even though I didn't modify the file.
In response to your problem, having done the Xcode Old/Xcode New dance every year since 2012, I got into the habit of making sure they are never running concurrently. Things get even worse if you run xcodebuild on the command line while a different UI version is running.
You can't run multiple versions of Xcode concurrently due to CoreSimulator fighting over which version gets to run. This is a limitation we are aware of. We are also very aware of the problems it causes.
As for other problems, please file radars and respond to requests for additional information. I have been on the external side of radar, I know it can be frustrating, but we do read them and we do take direct action based on them. Even duplicates are very useful.
What? I have very little problem running Xcode 7 and Xcode 8 alongside each other, as long as you close the simulator before running from the other version.
Only one CoreSimulator service can be running at a time because only one can be in control of the devices and the database. If you try to keep two versions of the tools open they'll keep killing each other's CoreSimulator or one will get stuck with the wrong incompatible version.
CoreSimulator is used during builds and for IB's accurate rendering feature. Instruments and Console open connections to CoreSimulator for various purposes. A lot of things can break.
> having done the Xcode Old/Xcode New dance every year
There didn't use to be this hard division between Xcode versions because of the compiler. If you were willing to do some reconfiguring, you could use a newer GCC or "Apple LLVM" with an older Xcode.
I'm not sure why it hasn't been done yet; I've only been on the team for two months now. That would indeed be lovely. But the plan is to have one of our contractors move the entire project up to Swift 3. I don't know exactly when, though...
The file changing just from being looked at is one of the basic reasons that IB is bad. Apple made a grievous mistake in designing UI markup that is intended to be hidden from developers. Un-organizable, un-diffable toxic sludge.
It boils down to budget. The app ecosystem simply isn't a high priority because it makes little to no money relative to Apple's core business. Matter of fact, the best thing they did was introduce ads to app search; at least now they're probably making a few more bucks.
They invest most of their resources into hardware and critical software. Everything else clearly is secondary.
It doesn't take much to form this opinion either; just look at the quality of their software overall. Very inconsistent, trending towards mostly stable. Design improves, which is good I guess, but that's not what developers need.
They don't make a ton of money from independent developers and it doesn't matter anyway because most of the heavily used apps are created by Apple themselves.
I'm only just getting into Apple development, and the course is using Xcode and Interface Builder. Could you tell me why I should avoid it and what approach to take instead? Or is it fine or even preferable to use it in the 'learning phase'?
Apple should start asking their software engineering candidates to invert red-black trees in interviews. I hear that makes these kinds of issues unlikely.
Yea, but iOS developers make twice as much as Android developers. Lots of customers love the walled garden and those customers spend a lot more on apps than Android customers do (Android installed base is multiples of iOS, yet revenues are double on iOS).
And one of the reasons the bigger-spending customers are with Apple is the benefits of the walled garden. It's more secure, and it's updated far faster (or at all). For developers it's a better environment to develop for: consistent screen sizes and hardware features, and I know I can write my latest app for the latest iOS and within months of its release the vast majority of revenue-producing customers will be on it. That's a big reason why better apps are written for iOS first: it's easier.