I've found a debugger particularly useful when trying to understand the details of other people's code, when changing the source code is not necessarily an option. It can quickly show me what the call stack looks like when a function is called, and let me inspect variables in different frames of the stack.
I've encountered code written in the 12factor style of using environment variables for configuration, and in that particular case there was no validation or documentation of the configuration options. Is this typical?
For onboarding new members, I would have thought it preferable to have a JSON configuration, where both documentation and validation of configuration options are provided by a JSON Schema file.
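For example, a minimal sketch of what I mean (the option names are invented): the schema documents each option, and a validator can check the config file against it before startup.

    {
      "$schema": "https://json-schema.org/draft/2020-12/schema",
      "type": "object",
      "required": ["port"],
      "properties": {
        "port": {
          "type": "integer",
          "minimum": 1,
          "maximum": 65535,
          "description": "TCP port the service listens on"
        },
        "logLevel": {
          "type": "string",
          "enum": ["debug", "info", "warn", "error"],
          "default": "info",
          "description": "Verbosity of application logging"
        }
      }
    }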
> I've encountered code written in the 12factor style of using environment variables for configuration, and in that particular case there was no validation or documentation of the configuration options. Is this typical?
This just feels like bad development and isn't unlike being given a random .properties/.ini file with no explanations of what the values mean. Sounds like someone didn't do their job, or the processes to encourage (require) them to do so weren't in place.
> For onboarding new members, I would have thought it preferable to have a JSON configuration, where both documentation and validation of configuration options are provided by a JSON Schema file.
You know, this can work, but then you need your applications to be able to read that file, and feeding it in through your container management solution (which many use, for a variety of reasons) wouldn't be as easy. Even without containers, you'd still need to be careful not to end up with 20 JSON files, all of which might need to be changed for a new environment.
Honestly, something like JSON5 https://json5.org/ is pretty cool because it adds comments, but otherwise JSON is a bit cumbersome to use. That said, some web servers like Caddy have gone for accepting JSON configuration as well, which lends itself nicely to automation, so it's still a valid approach: https://caddyserver.com/docs/json/
> I've encountered code written in the 12factor style of using environment variables for configuration, and in that particular case there was no validation or documentation of the configuration options. Is this typical?
I think it comes down to how your team values the code they write.
You can have a .env.example file committed to version control which explains every option in as much or as little detail as you'd like. For my own personal projects, I tend to document this file like this: https://github.com/nickjj/docker-flask-example/blob/main/.en....
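Something along these lines (a made-up sketch, not the actual linked file):

    # Which port the app server binds to. Default: 8000
    PORT=8000

    # Log verbosity: one of debug, info, warn, error. Default: info
    LOG_LEVEL=info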
Check the WCAG success criteria that your jurisdiction requires. One less obvious, and commonly required, criterion (WCAG 2.4.1[1]) is that the site has some mechanism by which a screen reader user can jump from the beginning of the document to the main content, without having to browse through the preceding navigational links and similar.
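The usual mechanism is a "skip to main content" link as the first focusable element on the page; a minimal sketch:

    <body>
      <a href="#main">Skip to main content</a>
      <nav><!-- site navigation --></nav>
      <main id="main"><!-- main content --></main>
    </body>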
I personally don't see having the version in the package address as particularly nastier than specifying it in the project file, as in Maven, which I have been dealing with lately. One problem I imagine is that when migrating to a new version of a library, the address has to be updated in each file the library is used in, with potentially catastrophic consequences if any one file is forgotten. A grep seems wise in that case.
Would you argue that there are more robust solutions in the Haskell ecosystem? What scheme do you prefer?
I don't think the issue was with specifying the version in the package address. It is that you need to use a third-party remote service to redirect your requests to the appropriate git repo and branch.
This is because Go itself doesn't support specifying branches/tags for a git dependency.
This workaround is contingent on the gopkg.in people keeping their service maintained and running. If it went down, all your dependencies would break.
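For example, an import like

    import "gopkg.in/yaml.v2"

is redirected by gopkg.in to the github.com/go-yaml/yaml repository at its v2 branch or tag; if gopkg.in stopped resolving, that import path would stop working even though the code on GitHub is fine.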
Here's an example I -just- had, actually, in production code (not in OCaml; below is pseudocode). It's not super powerful, but it made me happy because it turned what would have been a good 30 minutes of refactoring and re-testing into a quick one-minute task.
I had written a synchronous interface for some functionality that had quite a bit of input data. It called an external web API twice: once to post some data, then a recursive check to periodically ping the API until some changes took effect (yes, none of this was ideal, but I couldn't change the API).
I later realized that the code calling this interface needed to do some work in between these two calls. Refactoring it into two calls would have been a lot of work, requiring a lot of bookkeeping, passing variables around or recalculating them, etc., and bloating the code.
Instead, I just wrapped the second call in a closure, changing the interface; now, rather than returning the result of that second function, it just returned the function itself, which the calling code could invoke after it did its work.
That is, I went from
    calling_func() ->
        Val = interface(),
        ...

    interface() ->
        ... % do stuff to calculate vars
        do_work1(),
        do_work2(Var1, Var2, ...).
to
    calling_func() ->
        SynchFunc = interface(),
        ... % do whatever needs to happen between the two calls
        Val = SynchFunc(),
        ...

    interface() ->
        ... % do stuff to calculate vars
        do_work1(),
        fun() -> do_work2(Var1, Var2, ...) end.
I could also have done (provided I just needed side effects, not values) -
    calling_func() ->
        Val = interface(fun() -> ... end).

    interface(Func) ->
        ... % do stuff to calculate vars
        do_work1(),
        Func(),
        do_work2(Var1, Var2, ...).
to achieve the same effect, depending on how I want the interface to behave. I could also keep all existing calls working if my language supports multiple function arities, with
    interface() -> interface(fun() -> ok end).
or similar. The thing that closures give you, that I love, is that utility. I can minimally touch a function to inject entire chunks of functionality, without having to do major re-architecting.
This is hard to answer because an honest answer is "practically everywhere". First class functions, used properly, will take over every aspect of a program.
Here's a neat example from a paper which tried to compare programming speed between functional, OO, and imperative languages [0]. We'd like to build a "shape server" which allows you to build geometries of overlapping shapes and query whether a given point (in longitude/latitude) is covered by your shapes. The idea was to model a radar or early engagement system or something like that.
The obvious way might be to build a whole nest of objects which communicate among one another to consider the formation of the geometry. Another method is to just use functions from points to booleans which model the eventual question "is this point covered".
    type Lat  = Double
    type Long = Double

    type Geometry = (Lat, Long) -> Bool
    type Radius   = Double
    type Length   = Double

    -- a point is covered if it lies within rad of the center
    circle :: Radius -> (Lat, Long) -> Geometry
    circle rad (x0, y0) (x1, y1) = sqrt (dx*dx + dy*dy) <= rad
      where
        dx = x0 - x1
        dy = y0 - y1

    -- axis-aligned rectangle, given its top-left corner
    square :: Length -> Length -> (Lat, Long) -> Geometry
    square width height (left, top) (x, y) =
        x > left && x < left + width &&
        y < top  && y > top - height
So here we build our geometry straight out of lambdas. A Geometry is just a function from (Lat, Long) to Bool, and we generate them through partial application. We can also combine them.
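For instance, combinators along these lines would work (a sketch; the names are mine, not necessarily the paper's):

    union, intersect :: Geometry -> Geometry -> Geometry
    union     g1 g2 = \p -> g1 p || g2 p
    intersect g1 g2 = \p -> g1 p && g2 p

    -- a point is covered if any geometry in the list covers it
    anyOf :: [Geometry] -> Geometry
    anyOf gs p = any ($ p) gs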
This example is not very different from an object-oriented approach (an opaque interface with a "contains" method). That said, in a functional setting the tail recursion is great for functions like union and intersect.
Sure, and functions can feel a lot like OO. I often think of it as though OO were blown apart into all of its constituent parts and those parts were made available. Then, further, those parts "hang together" better than the variety of OO formalisms ever did anyway.
One instance would be function composition. If functions are values in your language, you can define function composition in the language itself; that is, given functions f : a -> b and g : b -> c, you can define their composition g . f : a -> c as
    g . f = \x -> g (f x)
(Here \ denotes lambda)
Why would it be useful to have function composition in your language? Well, it gives you similar power to "method chains" in an object-oriented language, without being tied to specific classes, especially if the language also supports polymorphic functions. It also interacts nicely with other abstractions usually found in functional languages. For example, consider map, of Map-Reduce fame:
    map :: Functor f => (a -> b) -> (f a -> f b)
then one has
    map (g . f) = map g . map f
Now imagine that map would cause the function to be sent to thousands of nodes in a cluster; the above identity tells you that instead of doing that twice, once for f and once for g, you might as well take g . f and send it out once. Also, say you knew for some reason that f . g = id, the identity function; then map f . map g = map (f . g) = map id, and since
    map id = id,
you would not need to do anything at all. This might appear trivial, but if you can teach the compiler about those cases, you can do interesting stuff with it. In the case of GHC (the Glasgow Haskell Compiler), it is able to use such rules in its optimization phase, which allows people to write apparently inefficient but declarative code and lets the compiler eliminate intermediate values. See for example https://hackage.haskell.org/package/repa.
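For a taste of what that looks like, GHC lets libraries declare rewrite rules; the classic map-fusion rule is roughly (a simplified sketch of what base ships):

    {-# RULES
    "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}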
Why do you need lambdas/closures in order to have function composition? Don't you just need higher order functions?
The thing about map id = id etc. probably has more to do with equational reasoning (you can use equals to substitute terms, since there are no side effects, at least in Haskell), but I don't see the connection to lambdas/closures.
The function returned by 'compose' is a closure because it captures references to its local environment (the two functions passed to 'compose'). If it did not close over these variables, it would not work. It might be possible to define a limited 'compose' operator in a language without closures that worked at compile-time/define-time, but you wouldn't be able to choose functions to compose at run-time like you could with a capturing 'compose.'
Nitpick: Lambdas and closures are different things. A closure is the semantic notion of a function that captures its local environment. A lambda is a mostly-syntactic notion: defining a function without giving it a name. Whether a lambda is a closure depends on the language's scoping rules.
What you need is that functions are values in your language. Lambdas are just a notation for function values. Typed lambda calculus is the internal language of cartesian closed categories, and function values are then called internal morphisms. The composition above is then the internal composition of internal morphisms. It would be possible for external composition to already be defined by the language; take the Unix shell, for example, with its built-in "|" operator. But if you want to be able to define function composition within the language, you need to have something like lambda.
> What you need is that functions are values in your language.
Well yeah, that's what I meant by higher order functions.
> Lambdas are just a notation for function values.
But regular (named) functions can still be used as function values. So this doesn't explain why you need things like lambdas in order to implement function composition.
> Typed lambda calculus is the internal language of cartesian closed categories and function values are then called internal morphisms. The composition above is then the internal composition of internal morphisms.
Ok bud.
> But if you want to be able to define function composition within the language, you need to have something like lambda.
Well I could implement function composition without the syntactic construct lambda:
    (.) g f x = g (f x)
I am not using any lambdas, in the sense of anonymous functions or closures. To implement function composition with a lambda is more of a stylistic choice, in this case. Granted, maybe functions-used-as-values are also lambdas, for all I know.
Interesting. I might say that partial application is a kind of closure. Certainly, it winds up the same: a "function carrying some data that it uses internally". I think you are correct that compose and apply do not require closures of any sort.
the "with-" pattern (originally from lisp, i believe, but ruby did a lot to bring it to the masses), where something like a filehandle manages its own lifecycle, and calls your closure in between. so rather than the C-like
    let f = open-file-for-writing(filename);
    for line in array {
        write-line-to-file(f, line);
    }
    close-file(f);
you can do
    with-open-file-for-writing(filename) { |f|
        for line in array {
            write-line-to-file(f, line);
        }
    }
where the definition of with-open-file-for-writing() would look like
    def with-open-file-for-writing(filename, closure) {
        let f = open-file-for-writing(filename);
        call-closure(closure, f);
        close-file(f);
    }
the benefit of having this be a closure rather than just a function pointer can be seen in the write-array-to-file example above: the "array" variable is in the scope of the calling function, but when with-open-file-for-writing calls your closure, it can still make full use of the calling function's local variables.
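for example, Haskell's standard library ships this exact pattern as withFile; a minimal sketch (note the lambda freely using ls from the enclosing scope):

    import System.IO

    -- withFile opens the file, hands the handle to the closure,
    -- and closes it afterwards (even if an exception is thrown)
    writeLinesTo :: FilePath -> [String] -> IO ()
    writeLinesTo path ls =
      withFile path WriteMode $ \h ->
        mapM_ (hPutStrLn h) ls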
IMO, the biggest downside there is how far it typically pushes the definition of that function from the call site. Small functions (a good practice anyway) ameliorate that a bit.
you can, but it's sufficiently clunky that it simply doesn't feel like a natural thing to do in the language. good language design is a lot more about the things it makes easy and natural than the things it makes possible.
One simple use I like a lot is using tail recursion as a replacement for gotos. It's great for state machines and other "algorithmy" tasks. You get the benefits of gotos (the code you write is the same as the code you think) but the end result is actually manageable.
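A minimal sketch in Haskell (made-up example): each state is a function and each transition is a tail call, so the state diagram you draw is literally the code you write. This recognizes strings containing an even number of 'a' characters:

    -- state 1: we have seen an even number of 'a's so far
    evenA :: String -> Bool
    evenA []         = True
    evenA ('a' : xs) = oddA xs    -- "goto" the odd state
    evenA (_   : xs) = evenA xs

    -- state 2: we have seen an odd number of 'a's so far
    oddA :: String -> Bool
    oddA []         = False
    oddA ('a' : xs) = evenA xs    -- "goto" the even state
    oddA (_   : xs) = oddA xs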
Your thoughts match up with Anders Breivik having lived in a predominantly white part of Oslo, a city with a large share of early-generation immigrants from non-Western areas.
Also, Raspbian Wheezy includes a rather old GHC, which is troublesome if you want to install packages from Hackage, and backporting a newer GHC doesn't seem to be possible: the build crashes some hours in (and also takes ages).