My textbook says the following:
Once the operating system decides to create a new process, it allocates space for all elements of the process image; therefore, the OS must know how much space is needed for the private user address space (programs and data) and the user stack.
As I understand it, the stack contains function call frames (return addresses, arguments) and local variables. And since the inputs to functions and the data produced by their computations often cannot be known at compile time, the OS must allocate a fixed amount of memory to serve as the stack.
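To illustrate what I mean (a toy Python sketch of my own, not from the textbook): the peak depth of the call stack can depend entirely on run-time input, which the compiler never sees.

```python
import sys

def countdown(n):
    # Each recursive call pushes another frame onto the stack,
    # so peak stack usage depends on the run-time value of n.
    if n == 0:
        return 0
    return 1 + countdown(n - 1)

print(countdown(10))  # shallow recursion: only a handful of frames
# countdown(10**6) would exhaust the stack (RecursionError in CPython),
# even though the source code is identical.
print(sys.getrecursionlimit())  # CPython's guard against overflowing the C stack
```

The same source code is fine for small inputs and blows the stack for large ones, which is exactly why I don't see how a sufficient size could be computed ahead of time.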
Given this, how does the OS determine, ahead of time, how much memory the stack will need? Given how dramatically programs vary, I cannot imagine how the OS could accomplish this. It would seem that allocating a fixed amount of stack memory at compile time would regularly result in either too much or too little. However, I presume there is an effective mechanism in place for allocating an appropriate amount of stack memory; otherwise, stack overflows would be a common occurrence.
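For what it's worth, I can at least observe that the OS does impose some fixed limit per process. This sketch assumes a Unix-like system where Python's `resource` module is available:

```python
import resource

# The soft limit is what the OS currently enforces for this process's
# stack; RLIM_INFINITY means "no limit". On many Linux systems the
# default soft limit is 8 MiB (adjustable with `ulimit -s`).
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    print("stack limit: unlimited")
else:
    print(f"stack limit: {soft} bytes ({soft // (1024 * 1024)} MiB)")
```

So there clearly is a fixed number; what I don't understand is how that one number can be appropriate for programs with such different stack demands.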
I would greatly appreciate it if someone could please take the time to clarify this concept.