@frigilux
I'm not old enough to remember the pre-digital days, but I spent a good chunk of my life working in sound engineering / production and I know a lot about the way audio's processed.
I've heard that the audio quality on the old crossbar, relay and step-by-step switches, where a call was just routed locally or over very clean lines, was absolutely superb because there was no compression. You were actually getting a physical path through the switching system, and the connections between the switches were generally either not compressed at all (just running on copper circuits in multicore cables) or only very basically multiplexed using frequency division.
The signals were definitely processed in the analogue domain and 'companded' by the circuitry in the switches, but they were never sampled or digitally compressed. So you'd have had a very 'warm' sound, I guess.
TDM (time division multiplexing) arrived in the 1960s, and then the switching systems themselves became digital using TDM techniques in the 1970s and 80s, mostly in this part of the world; some analogue exchanges may even have survived into the 1990s.
All modern-era PSTN lines are processed through a companding scheme: the µ-law algorithm in the US, Canada and Japan, or A-law in Europe. They're similar, but were developed in parallel, in isolation from each other. Effectively this squeezes the wide dynamic range of your voice into the limited number of levels the channel can carry (something similar happened in the analogue era too, but probably not as tightly done).
Then your voice is sampled by the line card in the switch and quantised (i.e. converted into a set of discrete values), losing a lot of the finer detail.
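To make the companding idea concrete, here's a minimal Python sketch of the µ-law curve (the µ = 255 variant used in North America and Japan). This is my own illustration of the principle, not the exact bit-level tables from the G.711 spec:

```python
import math

MU = 255  # mu-law parameter (North America / Japan)

def mu_law_compress(x: float) -> float:
    """Compand a sample in [-1, 1]: quiet signals get more resolution."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Inverse of the compressor (what the far-end line card does)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize_8bit(y: float) -> int:
    """Round the companded value to one of 256 discrete levels."""
    return max(-128, min(127, round(y * 127)))

# A quiet sample survives companding with a usable non-zero code,
# where plain linear 8-bit quantization would nearly wipe it out.
quiet = 0.01
print(quantize_8bit(mu_law_compress(quiet)))  # companded code
print(round(quiet * 127))                     # linear code: almost gone
```

The point of the logarithmic curve is that quiet passages, where the ear is most sensitive, get far more of the 256 quantisation levels than loud ones.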
The PSTN basically runs on 8-bit PCM audio streams at 64 kbit/s per channel, sampled at 8 kHz. (You may have heard of a T1 line; it comprises 24 of these channels. Its European counterpart, the E1 carrier, has 32.)
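Those figures all hang together arithmetically; a quick back-of-envelope check (variable names are mine):

```python
sample_rate_hz = 8_000   # 8 kHz sampling
bits_per_sample = 8      # 8-bit companded PCM

channel_bps = sample_rate_hz * bits_per_sample
print(channel_bps)       # 64000 -> the classic 64 kbit/s voice channel

t1_payload = 24 * channel_bps   # T1 carries 24 such channels
e1_payload = 32 * channel_bps   # E1 carries 32 slots (incl. signalling)
print(t1_payload, e1_payload)   # 1536000 2048000
```

(The T1 line rate you'll usually see quoted, 1.544 Mbit/s, is this payload plus framing overhead.)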
All of that sounds quite "OK" as in normal phone line quality.
When you move into mobile networks, things get a lot more bandwidth-sensitive and the audio quality definitely drops, particularly on older 2G CODECs that are possibly still in use in parts of some networks.
GSM began with "half-rate" at 6.5 kbit/s and "full-rate" at 13 kbit/s; the full-rate channel carries either the older full-rate CODEC or Enhanced Full Rate (which gives much more landline-like quality).
Both have been superseded in more modern networks by AMR, the "Adaptive Multi-Rate" codec, which arrived with the UMTS/3G systems and is also used on 2G GSM. It's a lot more sophisticated: it switches bit-rate and compression mode on the fly depending on the available bandwidth on the link to the network.
Sampling frequency: 8 kHz at 13-bit resolution (160 samples per 20 ms frame), filtered to 200–3400 Hz.
The AMR codec uses eight source codecs with bit-rates of 12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15 and 4.75 kbit/s.
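As a toy illustration (my own, not the actual 3GPP link-adaptation algorithm), mode selection amounts to picking the highest of those eight rates the radio link can currently sustain:

```python
# The eight AMR source codec modes, lowest to highest bit-rate (kbit/s)
AMR_MODES_KBPS = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]

def pick_amr_mode(available_kbps: float) -> float:
    """Choose the best codec mode the link can currently carry."""
    usable = [m for m in AMR_MODES_KBPS if m <= available_kbps]
    return usable[-1] if usable else AMR_MODES_KBPS[0]

print(pick_amr_mode(13.0))  # good link -> 12.2 (EFR-quality mode)
print(pick_amr_mode(6.0))   # congested cell -> 5.9
```

On a clean link you get the 12.2 kbit/s mode, which is essentially EFR quality; as the cell degrades, the codec steps down rather than dropping the call.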
It also has a load of other clever features; it even injects a little 'comfort noise' into the connection during silences to make it sound more natural.
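That silence trick is called comfort noise generation: during pauses the handset stops sending speech frames entirely, and the far end synthesises a little background hiss so the line doesn't sound dead. A rough sketch of the receiving side (my own simplification, not the real 3GPP DTX/CNG machinery):

```python
import random

FRAME_SAMPLES = 160  # one 20 ms frame at 8 kHz

def decode_frame(frame, noise_level=0.002):
    """If no speech frame arrived (DTX silence), fill in quiet noise."""
    if frame is None:
        return [random.uniform(-noise_level, noise_level)
                for _ in range(FRAME_SAMPLES)]
    return frame  # normal speech frames pass through untouched
```

Without this, silences would be absolute digital dead air and callers would keep asking "are you still there?".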
There is no voice system for LTE/4G at present; when you make or receive a call, your phone actually drops back to 3G (or 2G) for the audio transmission, a mechanism known as circuit-switched fallback.
I'm also not entirely sure what CODECs are used on CDMA networks like Verizon and Sprint, but I would suspect it's something similar and probably quite rate-adaptive too.
Most of the technology behind the stuff I'm describing above was developed by Ericsson, Nokia and NTT in Japan for UMTS (3G) and still forms the basis of most of those voice networks.
When it comes to mobiles though, poor reception, overloaded cells or your carrier just being mean with bandwidth can result in your calls sounding rather heavily compressed. So, it's quite hard to compare like with like as there are so many more variables.
But it's probably why voice calls on your mobile don't sound as nice as on your landline, which has a constant data rate and dedicated, reserved bandwidth for each call.