This original sense of 'call' (deriving from the 'call number' used to organize books and other materials in physical libraries) was also responsible for the coinage of 'compiler', according to Grace Hopper: 'The reason it got called a compiler [around 1952] was that each subroutine was given a "call word", because the subroutines were in a library, and when you pull stuff out of a library you compile things. It's as simple as that.'
OldfieldFund 4 hours ago [-]
I invoke them :]
lo_zamoyski 17 minutes ago [-]
The functional peeps even `apply` them.
tshaddox 2 hours ago [-]
Synonyms of "invoke" include "call forth" and "conjure up."
QuercusMax 26 minutes ago [-]
Or a "call sheet", which is the list of cast and crew needed for a particular film shoot
inglor_cz 27 minutes ago [-]
Same here, but I will say "a function call", not "a function invocation".
Invoking X sounds deliciously alchymistic, by the way.
liotier 6 hours ago [-]
I just connected the dots... The identifier digits in the Dewey Decimal classification are called "call numbers" !
dmd 6 hours ago [-]
Yes, that's in the second paragraph of the article.
seanc 5 hours ago [-]
I think Library Science has contributed much more to modern computing than we ever realize.
For example, I often bring up images of card catalogs when explaining database indexing. As soon as people see the index card, and then see that there is a wooden case for looking up by Author, a separate case for looking up by Dewey Decimal, etc., the light goes on.
https://en.wikipedia.org/wiki/Library_catalog
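As a quick sketch of that analogy (in Python, with made-up data), each catalogue case is just a separate index over the same shelf of records:

# Toy sketch of the card-catalogue analogy (invented data): the shelf is the
# table, and each wooden case is a separate index over the same records.
books = [
    {"dewey": "001.64", "author": "Adams", "title": "Programming the EDSAC"},
    {"dewey": "025.30", "author": "Baker", "title": "Cataloguing Rules"},
]

# One "case" per lookup key; both indexes point at the same shelf positions.
by_author = {b["author"]: i for i, b in enumerate(books)}
by_dewey = {b["dewey"]: i for i, b in enumerate(books)}

print(books[by_author["Baker"]]["title"])   # look up by author
print(books[by_dewey["001.64"]]["author"])  # look up by call number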
I’m old enough to have used (book) dictionaries and wooden case card catalogues in the local library. So when I learned about hashmaps/IDictionary a quarter century ago, that’s indeed the image that helped me grok the concept.
However, the metaphor isn’t that educationally helpful anymore. On more than one occasion I found myself explaining how card catalogues or even (book) dictionaries work, only to be met with the reply: “oh, so they’re basically analogue hashmaps”.
tshaddox 2 hours ago [-]
I always think of the indexes in the back of books as the origin of the term in computing. The relationship to "index cards" never even occurred to me!
kgwgk 2 hours ago [-]
Index cards are not different from index entries in a book. Index is “indicator” or “pointer” in Latin (hence the name of the finger).
tshaddox 44 minutes ago [-]
Yeah, I've got the vocab, I just never associated index cards with that use case because growing up we only ever used index cards for labeling, note-taking, arts and crafts, and flash cards.
evv 3 hours ago [-]
A few months ago I was asking myself, why is the "standard" width of a terminal 80 characters? I assumed it had to do with the screen size of some early PCs.
But nope, it's because a punch card was 80 characters wide. And the first punch cards were basically just index cards. Another hat tip to the librarians.
I guess this is the computing equivalent of a car being the width of two horses' asses...
tshaddox 47 minutes ago [-]
And the use of punch cards in computing is (arguably) inspired by the textile industry. Punched cards were used to configure looms starting way back in the 1700s.
ssalazar 57 seconds ago [-]
For those who aren't already familiar, James Burke in Connections has a great summary/rundown of this technological progression from Jacquard loom to census tabulator to computer punchcard, starting around the 36 minute mark here: https://youtu.be/z6yL0_sDnX0?si=NtyyybZSGCKmktdG&t=2150 (though the whole video is worth watching).
Absolutely! I confess I assumed this was explicitly part of how things were taught, with the "projected" attributes in the index being what you would fit on a card. I'm surprised that so many seem to not have any mental model for how this stuff works.
charcircuit 4 hours ago [-]
Even if it did contribute more, it still contributed an absolutely minuscule amount to modern computing.
Swiffy0 12 hours ago [-]
I'm Finnish, and in Finnish we translate "call" in the function context as "kutsua", which when translated back into English becomes "invite" or "summon".
So at least in Finnish the word "call" is considered to mean what it means in a context like "a mother called her children back inside from the yard" instead of "call" as in "Joe made a call to his friend" or "what do you call this color?".
Just felt like sharing.
morningsam 3 hours ago [-]
In German, we use "aufrufen", which means "to call up" if you translate it fragment by fragment, and which in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).
It's also separate from the verb for making a phone call, which would be "anrufen".
kqr 40 minutes ago [-]
Interesting! Across the lake in Sweden we do use "anropa" for calling subprograms. I've never heard anyone in that context use "uppropa" which would be the direct translation of aufrufen.
TrackerFF 30 minutes ago [-]
In Norway it is «funksjonskall», or literally function call. And the «kall» / «call» is just that, a call for something.
Night_Thastus 2 hours ago [-]
'Summon' implies a bit of eldritch horror in the code, which is very appropriate at times. 'Invite' could also imply it's like a demon or vampire, which also works!
ivanjermakov 4 hours ago [-]
In Russian it's kind of similar; the back translation is "call by phone", "summon", "invite".
olegp 7 hours ago [-]
Unrelated, but if you happen to be in Helsinki, you should join the local Hacker News meetup: https://bit.ly/helsinkihn
Page 31 has:
> … if, as a result of some error on the part of the programmer, the order Z F does not get overwritten, the machine will stop at once. This could happen if the subroutine were not called in correctly.
> It will be noted that a closed subroutine can be called in from any part of the program, without restriction. In particular, one subroutine can call in another subroutine.
See also the program on page 33.
The Internet Archive has the 1957 edition of the book, so I wasn’t sure if this wording had changed since the 1951 edition. I couldn’t find a paper about EDSAC from 1950ish that’s easily available to read, but [here’s a presentation with many pictures of artefacts from EDSAC’s early years](https://chiphack.org/talks/edsac-part-2.pdf). It has a couple of pages from the 1950 “report on the preparation of programmes for the EDSAC and the use of the library of subroutines” which shows a subroutine listing with a comment saying “call in auxiliary sub-routine”.
userbinator 16 hours ago [-]
Somewhat less frequently, I also hear "invoke" or "execute", which is more verbose but also more generic.
Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.
spudlyo 15 hours ago [-]
Invoke comes from Latin invocō, invocāre, meaning “to call upon”. I wouldn’t view it as a misuse, but rather a shortening.
thaumasiotes 14 hours ago [-]
> Invoke comes from Latin invocō, invocāre, meaning “to call upon”.
(In the way you'd call upon a skill, not in the way you'd call upon a neighbor.)
coldtea 7 hours ago [-]
But vocare (the voco in invoco) is how you'd call a neighbor
marcosdumay 3 hours ago [-]
It's calling a person like by saying their name loudly.
exe34 7 hours ago [-]
Which fits nicely for calling a function - you use its skill, you don't call for a chat.
pansa2 15 hours ago [-]
> strange misuses of "call"
My favourite (least favourite?) is using “call” with “return”. On more than one occasion I’ve heard:
“When we call the return keyword, the function ends.”
jamesfinlayson 14 hours ago [-]
I remember someone in university talking about the if function (which ostensibly takes one boolean argument).
Delphiza 10 hours ago [-]
In Excel formulas everything is a function. IF, AND, OR, NOT are all functions. It is awkward and goes against what software devs are familiar with, but there are probably more people familiar with the Excel IF function than with any other form. Here is an example taken from the docs:
=IF(AND(A3>B2,A3<C2),TRUE,FALSE)
EForEndeavour 9 hours ago [-]
Excel cell formulas are the most widely used functional programming language in the world.
weinzierl 14 hours ago [-]
Sounds like something Prof. John Ousterhout would say ;-). The place where this is literally accurate would be Tcl.
I don't know enough Smalltalk to be sure but I seem to remember it has a similar approach of everything is an object and I wouldn't be surprised if they'd coerced control flow somehow into this framework.
Also Forth comes to mind, but that would probably be a stretch.
zahlman 4 hours ago [-]
> I don't know enough Smalltalk to be sure but I seem to remember it has a similar approach of everything is an object and I wouldn't be surprised if they'd coerced control flow somehow into this framework.
I would include the cond function from lisp, or the generalization from lambda calculus
λexpr1.λexpr2.λc.((c expr1) expr2)
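For anyone who wants to poke at that encoding, here is a minimal sketch of it in Python (the names TRUE, FALSE and cond are mine, not standard): the boolean itself does the selecting, so "if" is just application.

# A Church boolean picks one of two branches; "if" is then plain application.
TRUE = lambda t: lambda f: t    # picks the first branch
FALSE = lambda t: lambda f: f   # picks the second branch

# λexpr1.λexpr2.λc.((c expr1) expr2), transcribed directly:
cond = lambda expr1: lambda expr2: lambda c: (c(expr1))(expr2)

print(cond("then-branch")("else-branch")(TRUE))   # -> then-branch
print(cond("then-branch")("else-branch")(FALSE))  # -> else-branch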
ahartmetz 2 hours ago [-]
I frequently see people treating if as if it was "taking a comparison", so: if (variable == true) ...
Findecanor 14 hours ago [-]
There are languages in which `if` is a function.
In Tcl, `if` is called a "command".
spacechild1 7 hours ago [-]
Also in Smalltalk and sclang (Supercollider language)
kitd 6 hours ago [-]
Or anything Lispy
Zambyte 6 hours ago [-]
If takes two or three arguments, but never one. The condition is the one made syntactically obvious in most languages, the consequent is another required argument, and the alternative is optional.
dylan604 6 hours ago [-]
Huh? if (true) {} takes precisely one argument.
mjburgess 6 hours ago [-]
That's an application of `if` with one of the arguments empty.
The semantics of `if` require at least `if(cond, clause)`, though more generally `if(cond, clause, else-clause)`.
devnullbrain 5 hours ago [-]
You and Zambyte are both doing the same thing the top level comment is complaining about.
They aren't talking about C and its descendants in particular, but more generally. For example in Haskell and Scheme there is only an if function and no if statement. And you're welcome to create an if function in any language you like and use it instead of the native syntax. I like to use an if function in PostgreSQL because it's less cumbersome than a case expression.
So in the abstract, if is a ternary function. I think the original comment was reflecting on how "if (true) ... " looks like a function call of one argument but that's obviously wrong.
disconcision 4 hours ago [-]
this is not quite right. haskell and scheme have if expressions, not if statements. that's not the same as if being a function. if is not, and cannot be, a function in scheme, as it does not have scheme function semantics. specifically, it is not strict, as it does not evaluate all its subexpressions before executing. since haskell is non-strict, if can be implemented as a function, and iirc it is
trealira 3 hours ago [-]
> since haskell is non-strict, if can be implemented as a function, and iirc it is
"If" can be implemented as a function in Haskell, but it's not a function. You can't pass it as a higher-order function and it uses the "then" and "else" keywords, too. But you could implement it as a function if you wanted:
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
Then instead of writing something like this:
max x y = if x > y then x else y
You'd write this:
max x y = if' (x > y) x y
But the "then" and "else" remove the need for parentheses around the expressions.
devnullbrain 5 hours ago [-]
Arguments are expressions in Haskell. In abstract, it uses expressions.
antonvs 12 hours ago [-]
Try implementing that in most languages and you'll run into problems.
In an imperative programming language with eager evaluation, i.e. where arguments are evaluated before applying the function, implementing `if` as a function will evaluate both the "then" and "else" alternatives, which will have undesirable behavior if the alternatives can have side effects.
In a pure but still eager functional language this can work better, if it's not possible for the alternatives to have side effects. But it's still inefficient, because you're evaluating expressions whose result will be discarded, which is just wasted computation.
In a lazy functional language, you can have a viable `if` function, because it will only evaluate the argument that's needed. But even in the lazy functional language Haskell, `if` is implemented as built-in syntax, for usability reasons - if the compiler understands what `if` means as opposed to treating it as an ordinary function, it can optimize better, produce better messages, etc.
In a language with the right kind of macros, you can define `if` as a macro. Typically in that case, its arguments might be wrapped in lambdas, by the macro, to allow them to be evaluated only as needed. But Scheme and Lisp, which have the right kind of macros, don't define `if` as a macro for similar reasons to Haskell.
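A rough sketch of both points in Python (an eager language; the names here are made up for illustration):

# A plain function: both branch expressions are evaluated by the caller
# before if_ ever runs.
def if_(cond, then_val, else_val):
    return then_val if cond else else_val

def loud(label, value):
    print("evaluating", label)  # stands in for a side effect
    return value

if_(True, loud("then", 1), loud("else", 2))   # prints BOTH labels

# The workaround: branches arrive as zero-argument lambdas ("thunks"), so only
# the chosen one is run; a macro would typically insert these wrappers for you.
def if_lazy(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

if_lazy(True, lambda: loud("then", 1), lambda: loud("else", 2))  # only "then"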
One language in which `if` is a function is the pure lambda calculus, but no-one writes real code in that.
The only "major" language I can think of in which `if` is actually a function (well, a couple of methods) is Smalltalk, and in that case it works because the arguments to it are code blocks, i.e. essentially lambdas.
tl;dr: `if` as a function isn't practical in most languages.
igouy 1 hours ago [-]
Isn't practical in Smalltalk either, so the compiler does something special:
ifFalse: alternativeBlock
"Answer the value of alternativeBlock. Execution does not actually
reach here because the expression is compiled in-line."
^alternativeBlock value
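For comparison, a rough sketch of that Smalltalk shape in Python (class and method names invented): the receiver is the boolean, the branches arrive as blocks, and no special evaluation rule is needed beyond passing lambdas.

# The boolean object receives the "message" and runs only the chosen block.
class SmalltalkishBool:
    def __init__(self, value):
        self.value = value

    def if_true_if_false(self, true_block, false_block):
        return true_block() if self.value else false_block()

SmalltalkishBool(1 > 0).if_true_if_false(
    lambda: print("taken"),
    lambda: print("not taken"),
)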
antonvs 48 minutes ago [-]
Oh thanks, I didn't know that. I thought it just relied on the explicit code blocks.
But yeah, this is a pretty critical point for optimizations - any realistic language is likely to optimize this sooner or later.
immibis 7 hours ago [-]
I don't think Haskell needs 'if' to be a construct for compiler optimization reasons; it could be implemented easily enough with pattern matching:
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
The compiler could substitute this if it knew the first argument was a constant.
Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet. The early versions of Haskell had pretty terrible I/O, too.
antonvs 3 hours ago [-]
With a function version of `if`, in general the compiler needs to wrap the alternative in closures ("thunks"), as it does with all function arguments unless optimizations make it unnecessary. That's never needed in the syntactic version. That's one significant optimization.
In GHC, `if` desugars to a case statement, and many optimizations flow from that. It's pretty central to the compiler's operation.
> Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet.
Neither of these are true. My comment above was attempting to explain why `if` isn't implemented as a function. Haskell is a prime example of where it could have been done that way, the authors are fully aware of that, but they didn't because the arguments against doing it are strong. (Unless you're implementing a scripting-language type system where you don't care about optimization.)
fn-mote 6 hours ago [-]
A short search led to this SE post [1], which doesn't answer the "why" but says "if" is just syntactic sugar that turns into `ifThenElse`...
The post claims that this is done in such a basic way that if you have managed to rebind `ifThenElse`, your rebound function gets called. I didn't confirm this, but I believed it.
[1]: https://softwareengineering.stackexchange.com/questions/1957...
layer8 4 hours ago [-]
Some people use parentheses for the return value, to make it look like a function call:
return(value);
pwdisswordfishz 13 hours ago [-]
Eh, "return" is just a very restricted continuation with special syntax… it's a stretch to say you "call" it, but not unjustified.
userbinator 14 hours ago [-]
I've heard that too --- the voice in my head automatically read it in the customary thick Indian accent.
zahlman 4 hours ago [-]
>Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.
From my own experience, native speakers (who are beginners at programming) also do this. They also describe all kinds of things as "commands" that aren't.
Dwedit 15 hours ago [-]
C# seems to like to use "Invoke" for things like delegates or reflected methods. Then it proceeds to use "Call Stack" in the debugger view.
Zambyte 6 hours ago [-]
Microsoft devs get paid by the character, I'm not sure that counts.
treyd 15 hours ago [-]
I actually see the converse often with novices, referring to statements (or even entire function decls) as "commands".
kragen 15 hours ago [-]
"Command" is a better term for what we call "statements" in imperative programming languages. "Statement" in this context is an unfortunate historical term; except in Prolog, these "statements" don't have a truth-value, just an effect. (And in Prolog we call them "clauses" instead.)
9rx 1 hours ago [-]
Is it? "Statement", defined by the dictionary as "the expression of an idea or opinion through something other than words.", seems quite apt. Symbols may end up resembling words, which perhaps is your contention, but technically they are only symbols.
Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
adrian_b 5 hours ago [-]
True.
In many early computer programming documents the term "order" was used instead of "statement", where "order" was meant as a synonym for "command" and not as referring to the ordering of a sequence.
kragen 4 hours ago [-]
Occasionally, but much more often (as in Mauchly's cited paper) an "order" was a machine instruction, not a high-level language "statement".
adrian_b 3 hours ago [-]
Yes, but that is mostly because in the first few years (including by the time of Mauchly), there were no "high-level" programming languages, so the "orders" composing the text of a program corresponded to instructions directly executable by the machine.
I believe that the term "statement" has been imposed by the IBM publications about FORTRAN, starting in 1956.
Before the first public documents about IBM FORTRAN, the first internal document about FORTRAN, from 1954, had used the terms "formula" for anything that later would be called "executable statement", i.e. for many things that would not have been called formulas either before or after that, like IF-formulas, DO-formulas, GOTO-formulas and so on, and the document had used "sentence" for what later would be called "non-executable statements" (i.e. definitions or declarations).
Before FORTRAN (1951 to 1953), for his high-level programming language Heinz Rutishauser had used the term "Befehl", which means "command". (For what we name today "program", he had used the term "Rechenplan", which means "computation plan".)
1718627440 5 hours ago [-]
On an old Nokia you follow links by pressing the call button.
kgwxd 6 hours ago [-]
Now that it's commonly "tap or click the button" I might be down with the next gen using "call". Anything, as long as they don't go with "broh, the button".
smartaz42 1 hours ago [-]
I've always thought it odd that the one thing the caller and callee need to agree on is called "arguments".
skaushik92 16 hours ago [-]
> ... but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.
Oddly, I never thought of the term library as originating from a physical labelled and organized shelf of tapes, until now.
leoc 7 hours ago [-]
At https://youtu.be/DjhRRj6WYcs?t=338 you can see EDSAC's original linker, Margaret Hartrey, taking a library subroutine from the drawer of paper tapes. (But you should really watch the whole thing, of course!)
zabzonk 15 hours ago [-]
I've never heard of a library being called anything else - look at the common file extension .lib, for example.
eythian 8 hours ago [-]
I don't see .lib being all that common, but it might just be what I'm used to. `.so` or `.dll` or such sure (though to be fair, the latter does include the word library.)
spacechild1 7 hours ago [-]
.lib is the traditional extension for static libraries and import libraries on Windows. Every .dll has an accompanying .lib. (Msys2 uses their own extensions, namely .a for static libraries and .dll.a for import libraries.)
empath75 7 hours ago [-]
It's not _that_ they are called libraries, but _why_ they are called libraries. I had assumed, like many others that it was purely by analogy (ie, a desktop), and not that the term originated with a physical library of tapes.
ks2048 15 hours ago [-]
There is also the phrase in music, "call and response" - even referencing a return value.
Ferret7446 11 hours ago [-]
Not a scientific theory, but an observation. New words propagate when they "click". They are often short, and for one reason or another enable people to form mental connections and remember what they mean. They spread rapidly between people like a virus. Sometimes they need to be explained, sometimes people get it from context, but afterward people tend to remember them and use them with others, further propagating the word.
A fairly recent example, "salty". It's short, and kinda feels like it describes what it means (salty -> tears -> upset).
It sounds like "call" is similar. It's short, so easy to say for an often-used technical term, and there are a couple of ways it can "feel right": calling up, calling in, summoning, invoking (as a magic spell). People hear it, it fits, and the term spreads. I doubt there would have been many competing terms, because terms like "jump" were already in use to refer to existing concepts. Also keep in mind that telephones were hot, magical technology that would have become widespread around this same time period. The idea of being able to call up someone would be at the forefront of people's brains, so contemporary programmers would likely have easily formed a mental connection/analogy between calling people and calling subroutines.
gjm11 6 hours ago [-]
Side-note: for me, at least, "salty" isn't anything to do with tears; in my idiolect when someone's "salty" it doesn't mean they're sad, it means they're angry or offended or something along those lines. The metaphor is more about how salt (in large quantities) tastes strong and sharp.
(Which maybe illustrates that a metaphor can succeed even when everyone doesn't agree about just what it's referring to, as you're suggesting "call" may have done.)
frosted-flakes 4 hours ago [-]
'Salty' in that context means 'bitter', ironically.
layer8 3 hours ago [-]
I would say “resentful”, “disgruntled”, “aggrieved”. “Bitter” feels like a longer-lasting, somewhat less emotional condition to me.
True enough, there are endless subtleties to language (English in particular) that make words simultaneously vague and extremely specific.
And I agree that it has nothing to do with tears. The actual etymology stems from sailors: https://www.planoly.com/glossary/salty
Wiktionary defines bitter as "cynical and resentful", which doesn't quite capture the "longer-lasting, somewhat less emotional condition" part of it.
leoc 2 hours ago [-]
If the librarian's 'call for' meaning was indeed the one originally intended, then even in Mauchly's 1947 article you can already see slippage towards the more object-oriented or actor-oriented 'call to' meaning.
kgwgk 2 hours ago [-]
That’s a big “if” when the only support for that theory seems to appear a decade later than the “call in” meaning.
kgwgk 51 minutes ago [-]
Why did the wild “call” that dog in Jack London’s novel?
gorgoiler 3 hours ago [-]
Another use of call is in barn / folk / country dancing where a caller will call out the moves. “Swing your partner!”, “Dosey do!”, “Up and down the middle!” etc. Each of these calls describes a different algorithmic step. However, it’s unlikely this etymology has anything to do with function calling: each call modifies the global state of the dancers with no values returned to the caller.
walthamstow 15 hours ago [-]
Off topic somewhat but where the hell did the verb 'jump' come from for video calls? I'm always being asked to jump on a call
mook 14 hours ago [-]
I assume it's because you're doing something you'd rather not do to benefit somebody else, much like you'd jump on a grenade :p
Bluestein 11 hours ago [-]
I'll henceforth take this as the canonical explanation :)
ashdnazg 8 hours ago [-]
I thought it's like hopping onto a bus. Then it doesn't take a lot for "hop" to change into "jump".
klodolph 14 hours ago [-]
I always thought it was just a metaphor for suddenly leaving whatever you were doing at the time. You’re doing something else, and then you jump on a call. You don’t mosey on over and show up on the call fifteen minutes later.
frou_dh 8 hours ago [-]
It seems like it has the connotation of being spontaneous and not requiring preparation.
Brian_K_White 13 hours ago [-]
You would jump on a call before video was involved. Not even necessarily a conference call either, you could jump on the horn etc.
It just means to start doing something, no great mystery.
IshKebab 13 hours ago [-]
That's just a standard meaning of the word jump. You're jumping from whatever you were doing to a video call.
lgas 14 hours ago [-]
This was a result of Zoom's acquisition of the band House of Pain.
toast0 14 hours ago [-]
Thought it was Kris Kross.
weinzierl 11 hours ago [-]
When in reality it was Van Halen.
kragen 14 hours ago [-]
Maybe it's a hint that the operation is irreversible! :-)
empath75 7 hours ago [-]
That has nothing to do with the video aspect, but the group aspect. "Jump", "Hop" and the like are making a group call analogous to a bus ride, where people can jump on and off.
pixelpoet 10 hours ago [-]
Another interesting one is in games when an effect is said to "proc", which I guess is from a procedure getting called.
s3krit 7 hours ago [-]
Basically, yeah. It actually has its origin in MUDs, from 'spec_proc', short for special procedure.
zahlman 4 hours ago [-]
My friends always seemed to think this one comes from "procure".
kragen 15 hours ago [-]
It's interesting to think of "calling" as "summoning" functions. We could also reasonably say "instantiating", "evaluating", "computing", "running", "performing" (as in COBOL), or simply "doing".
In Mauchly's "Preparation of Problems for EDVAC-Type Machines", quoted in part in the blog post, he writes:
> The total number of operations for which instructions must be provided will usually be exceedingly large, so that the instruction sequence would be far in excess of the internal memory capacity. However, such an instruction sequence is never a random sequence, and can usually be synthesized from subsequences which frequently recur.
> By providing the necessary subsequences, which may be utilized as often as desired, together with a master sequence directing the use of these subsequences, compact and easily set up instructions for very complex problems can be achieved.
The verbs he uses here for subroutine calls are "utilize" and "direct". Later in the paper he uses the term "subroutine" rather than "subsequence", and does say "called for" but not in reference to the subroutine invocation operation in the machine:
> For these, magnetic tapes containing the series of orders required for the operation can be prepared once and be made available for use when called for in a particular problem. In order that such subroutines, as they can well be called, be truly general, the machine must be endowed with the ability to modify instructions, such as placing specific quantities into general subroutines. Thus is created a new set of operations which might be said to form a calculus of instructions.
Of course nowadays we do not pass arguments to subroutines by modifying their code, but index registers had not yet been invented, so every memory address referenced had to be contained in the instructions that referenced it. (This was considered one of the great benefits of keeping the program in the data memory!)
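A toy simulation of that convention (in Python; invented, not actual EDVAC code): the caller writes the argument's address into the subroutine's own instructions before transferring control.

# Argument passing by instruction modification: the caller patches an address
# operand inside the subroutine, then "jumps" to it.
memory = [0] * 32
memory[20] = 7                      # the caller's datum lives at address 20

subroutine = {
    10: ["LOAD", None],             # operand patched in by each caller
    11: ["ADD", 1],
}

def call_subroutine(argument_address):
    # "placing specific quantities into general subroutines"
    subroutine[10][1] = argument_address
    accumulator = 0
    for address in (10, 11):        # transfer control, run to the end
        op, operand = subroutine[address]
        if op == "LOAD":
            accumulator = memory[operand]
        elif op == "ADD":
            accumulator += operand
    return accumulator

print(call_subroutine(20))          # -> 8: the subroutine saw the caller's value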
A little lower down he says "initiate subroutines" and "transferring control to a subroutine", and talks about linking in subroutines from a "library", as quoted in the post.
He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines. He does talk about mathematical functions being computed by subroutines, including things like matrix multiplication:
> If the subroutine is merely to calculate a function for a single argument, (...)
weinzierl 9 hours ago [-]
"He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines."
I think the early BASICs used the subroutine nomenclature for GOSUB, where there was no parameter passing or anything, just a jump that automatically remembered the place to return to.
Functions in BASIC, as I remember it, were something quite different. I think they were merely named abbreviations for arithmetic expressions, and simple one-line arithmetic expressions only. They were more similar to very primitive and heavily restricted macros than to subroutines or functions.
kragen 9 hours ago [-]
Right, that's what Algol-58 functions were, too. I think FORTRAN also has a construct like this, but I forget.
bregma 4 hours ago [-]
FORTRAN had both functions and subroutines. A function returned a value and was invoked in an expression (eg. S=SIN(A)). A subroutine was invoked by calling it (eg. CALL FOPEN(FNAME, PERMS)).
kragen 3 hours ago [-]
I should probably just Google this, but how did you define the functions?
bregma 36 minutes ago [-]
C -------- START OF FUNCTION -------
INTEGER FUNCTION INCREMENT(I)
INCREMENT=I+1
RETURN
END
C -------- END OF FUNCTION -------
kps 10 minutes ago [-]
FORTRAN also had single-expression function definitions, e.g.
ARGF(X, Y, Z) = (D/E) * Z+ X** F+ Y/G
Naturally this is syntactically identical to an array element assignment, which is one of the many things that made compiling FORTRAN so much fun.
seabass 15 hours ago [-]
I love this sort of cs history. I’m also curious—why do we “throw” an error or “raise” an exception? Why did the for loop use “for” instead of, say, “loop”?
antod 15 hours ago [-]
I'm guessing "throw" came about after someone decided to "catch" errors.
As for "raise", maybe exceptions should've been called objections.
zahlman 4 hours ago [-]
RuntimeObjection: autopsy report changed contents during cross-examination
yongjik 13 hours ago [-]
It's been ages, but I think an earlier edition of Stroustrup's The C++ Programming Language explains that he specifically chose "throw" and "catch" because more obvious choices like "signal" were already taken by established C programs and choosing "unusual" words (like "throw" and "catch") reduced chance of collision. (C interoperability was a pretty big selling point in the early days of C++.)
jamesemmott 10 hours ago [-]
The design of exception handling in C++ was inspired by ML, which used 'raise', and C++ might itself have used that word, were it not already the name of a function in the C standard library (as was 'signal'). The words 'throw' and 'catch' were introduced by Maclisp, which is how Stroustrup came to know of them. As he told Guy Steele at the second ACM History of Programming Languages (HOPL) conference in 1993, 'I also think I knew whatever names had been used in just about any language with exceptions, and "throw" and "catch" just were the ones I liked best.'
flufluflufluffy 15 hours ago [-]
I think “raise” comes from the fact that the exception propagates “upward” through the call stack, delegating the handling of it to the next level “up.” “Throw” may have to do with the idea of not knowing what to do/how to handle an error case, so you just throw it away (or throw your hands up in frustration xD). Totally just guessing
owlbite 15 hours ago [-]
I suspect it comes from raising flags/signals (literally, as one might run a flag up a flagpole?) to indicate CPU conditions, and then that terminology getting propagated from hw to sw.
reverendsteveii 3 hours ago [-]
idk; in some circles boolean variables are called flags, but I've never seen them referred to as being raised or unraised/lowered, only set and unset.
Findecanor 14 hours ago [-]
Sounds plausible. Some of the earliest exception handling systems did not have any semantic difference between CPU exceptions and software exceptions.
devnullbrain 5 hours ago [-]
You can still use SIGUSR1 and SIGUSR2 for it.
comex 14 hours ago [-]
I would have thought it came from the concept of 'raising an issue' or even 'raising a stink'.
baq 14 hours ago [-]
You throw something catchable and if you fail to catch it it’ll break. Unless it’s a steel ball.
You raise flags or issues, which are good descriptions of an exception.
titanomachy 15 hours ago [-]
That's a great question. The first language I learned was python, and "for i in range(10)" makes a lot of sense to me. But "for (int i = 0; i < 10; i++)" must have come first, and in that case "for" is a less obvious choice.
Dwedit 15 hours ago [-]
BASIC had the FOR-NEXT loop back in 1964.
10 FOR N = 1 TO 10
20 PRINT " ";
30 NEXT N
C language would first release in 1972, that had the three-part `for` with assignment, condition, and increment parts.
kahirsch 13 hours ago [-]
This reminds me of a little bit of trivia. In very old versions of BASIC, "FORD=STOP" would be parsed as "FOR D = S TO P".
I found that amusing circa 1975.
zabzonk 15 hours ago [-]
In Fortran, it is a do-loop :)
bee_rider 14 hours ago [-]
Fortran has grown a lot over time. If somebody said it didn't have a do loop in 196X, I wouldn't be too surprised.
Really it’s just syntactic sugar, just use a goto.
bregma 4 hours ago [-]
FORTRAN IV, at least the version I used on the PDP-11 running RSX, did not have a DO-loop. Just IF and GO TO. But it did have both logical and arithmetic IF.
pklausler 13 hours ago [-]
The entire point of Fortran was being an effective optimizing compiler for DO loops.
masklinn 15 hours ago [-]
FOR comes from ALGOL, in which, as far as I know, it was spelled:
for p := x step d until n do
QuesnayJr 15 hours ago [-]
Algol 58 had "for i:=0(1)9". C's for loop is a more general variant.
gmueckl 13 hours ago [-]
"For" for loop statements fits with math jargon: "for every integer i in the set [1:20], ..."
pansa2 14 hours ago [-]
“for” is short for “for each”, presumably. `for i in 1..=10` is short for “for each integer i in the range 1 to 10”.
amelius 4 hours ago [-]
Huh, I gosub my functions ...
burnt-resistor 11 hours ago [-]
Because, obviously, we stand out in a field of the code segment and shout the address we wish to jump to or push onto the stack. ;)
dleeftink 16 hours ago [-]
Also interesting to contrast this to invocation or application (e.g. to invoke or apply). I'm sure there are fair few 'functional dialects' out there!
vincent-manis 3 hours ago [-]
GLENDOWER: I can call spirits from the vasty deep.
HOTSPUR: Why, so can I, or so can any man;
But will they come when you do call for them?
-- Henry the Fourth, Part 1
ranger_danger 16 hours ago [-]
I seem to remember people used to say "call it up" when asking an operator to perform a function on a computer when the result was displayed in front of the user.
You “call upon” the function to perform a task, or return a value as the case may be. Just as you may call upon a servant or whatever.
thaumasiotes 14 hours ago [-]
The answer is in the article. You "call" functions because they are stored in libraries, like books, and like books in libraries, when you want them, you identify which one you want by specifying its "call number".
kgwgk 4 hours ago [-]
Another answer also in the article is that you call them like you call a doctor to your home, so they do something for you, or how people are “on call”.
The “call number” in that story comes after the “call”. Not the other way around.
lubosm 14 hours ago [-]
I'd like to point to CALL, a CPU instruction, and its origins. I'm not familiar with this, but it could reveal more than programming languages do. The instruction has been present at least since the first Intel microprocessors and microcontrollers were designed.
aitchnyu 14 hours ago [-]
I kept scrolling expecting to see this story:
> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...
https://www.catb.org/~esr/writings/taoup/html/modularitychap...
The first Intel microcontrollers were the 8008 and the 4004, designed in 01971. This is 13 years after this post documents the "CALL X" statement of FORTRAN II in 01958, shortly after which (he documents) the terminology became ubiquitous. (FORTRAN I didn't have subroutines.)
In the Algol 58 report https://www.softwarepreservation.org/projects/ALGOL/report/A... we have "procedures" and "functions" as types of subroutines. About invoking procedures, it says:
> 9. Procedure statements
> A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process with a fixed ordered set of input and output parameters, permanently defined by a procedure declaration. (cf. procedure declaration.)
Note that this does go to extra pains to include the term "call for", but does not use the phraseology "call a procedure". Rather, it calls for not the procedure itself, but the execution of the procedure.
However, it also uses the term "procedure call" to describe either the initiation or the execution of the procedure:
> The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters occupying input and output parameter positions there give complete information concerning the admissibility of parameters used in any procedure call, (...)
Algol 58 has a different structure for defining functions rather than procedures, but those too are invoked by a "function call"—but not by "calling the function".
I'm not sure when the first assembly language with a "call" instruction appeared, but it might even be earlier than 01958. The Burroughs 5000 seems like it would be a promising thing to look at. But certainly many assembly languages from the time didn't; even MIX used a STJ instruction to set the return address in the return instruction in the called subroutine and then just jumped to its entry point, and the PDP-10 used PUSHJ IIRC. The 360 used BALR, branch and link register, much like RISC-V's JALR today.