That was a pleasure to read: I'm just a layman, I know basically jack-shit about GSM/eSIM technologies, yet the article is written so well and provides enough details that I could understand what they wrote.
Fethbita 9 hours ago [-]
This is completely irresponsible behavior from Oracle as they put the whole eSIM ecosystem in danger by not fixing the issue.
lxgr 7 hours ago [-]
Without knowing the exact details, it seems to me like Oracle has a point here:
Java Card supports, broadly speaking, two types of bytecode verification: "On-card" and "off-card". On-card is secure against even malicious applets; off-card assumes that a trustworthy entity vets all applets before they are installed, and only signs them if they are deemed well-formed.
The off-card model just seems like a complete architectural mismatch for the eSIM use case, since there is no single trustworthy entity. SAT applets are not presented to the eUICC vendor for bytecode verification, so the entire security model breaks down if verification doesn't happen on-card.
Unfortunately, the GSMA eSIM specifications seem to be so generic that they don't even mandate Java as a bytecode technology, and accordingly don't impose any specific Java requirements, such as "all eUICC implementations supporting SAT via Java Card must not rely on off-card bytecode verification".
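To make the distinction concrete, here's a minimal sketch of where the trust decision sits in each model. This is my own toy model in Python, not Oracle's actual Java Card API; the verifier callable and the signature flag are hypothetical stand-ins.

```python
# Toy contrast of the two Java Card trust models (not Oracle's API).
# The point is only *where* bytecode well-formedness gets checked.

def install_on_card(cap_file: bytes, bytecode_is_well_formed) -> str:
    # On-card model: the card itself verifies the bytecode before linking,
    # so even a deliberately malformed applet cannot confuse the VM.
    if not bytecode_is_well_formed(cap_file):
        raise PermissionError("rejected by on-card verifier")
    return "linked (verified on the card)"

def install_off_card(cap_file: bytes, signature_is_valid: bool) -> str:
    # Off-card model: the card only checks a signature. Whoever signs is
    # trusted to have run the verifier beforehand; if they didn't (or were
    # malicious), type-confusing bytecode gets linked anyway.
    if not signature_is_valid:
        raise PermissionError("rejected: not signed by a trusted party")
    return "linked (trusted because someone off-card vouched for it)"
```

The off-card path is perfectly reasonable when a single issuer controls every applet on the card; in the eSIM case, the party signing SAT applets and the eUICC vendor are different entities, which is exactly the mismatch above.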
Fethbita 6 hours ago [-]
In this case, if you read the last few sections, they reported several issues to Oracle regarding the Java Card reference implementation, but these have not been fixed; Oracle's position is that the reference implementation is not supposed to be used in production. Oracle has a responsibility to fix these issues, as it is the primary source for everything related to Java Card, and other vendors treat its reference implementation as exactly that: a reference.
Also see their previous reply[1] to this company's findings from 2019. I can't help but agree with the article that if those issues had been fixed back then, there is a chance this wouldn't have happened today.
[1]: https://www.securityweek.com/oracle-gemalto-downplay-java-ca...
Definitely, no reference implementation should have security bugs.
But do you know whether Oracle's reference implementation of Java Card uses on-card or off-card verification, or more generally, whether it assumes installs from trusted sources only?
There are many Java Card applications where the assumption of all bytecode being trusted is reasonable, especially if all bytecode comes from the issuer and post-issuance application loading isn't possible. Of course, that would be a complete mismatch for an eUICC.
Fethbita 5 hours ago [-]
It does not use on-card verification, because if it did, the problem would not be present. You can check out their FAQ on the 2019 report[1].
[1]: https://security-explorations.com/java-card.html#faq
Then I’d say this just points to a concerning lack of understanding of the security model on the implementer’s side.
In an ideal world, there would of course only be on-card verification, but resource constraints on smart card chips are still a factor.
In the second best of all worlds, Oracle would have one reference implementation each for trusted and for untrusted byte code, and a big bold disclaimer on when to use which, but I’m not convinced even that would prevent against all possible implementation mistakes.
gruez 4 hours ago [-]
>The off-card model just seems like a complete architectural mismatch for the eSIM use case, since there is no single trustworthy entity. SAT applets are not presented to the eUICC vendor for bytecode verification, so the entire security model breaks down if verification doesn't happen on-card.
I thought the whole eSIM provisioning process required a chain of trust all the way up to the GSMA? Maybe the applet isn't verified by the eUICC vendor, but it's not like you can run whatever code either.
ACCount36 4 hours ago [-]
Seems like you actually could "run whatever code".
Apparently, GSMA recalled their universal eSIM test profiles. Prior to recall, those could be installed on ANY eSIM, and those profiles had applet updates enabled.
By installing such a profile onto an eSIM and then issuing your own update to it, you could run arbitrary applets.
hhh 10 hours ago [-]
$30k is a pittance for the quality of this work
jeroenhd 9 hours ago [-]
With Oracle claiming it's not their problem to fix, and with the mobile networking industry generally being slow and old-fashioned, I'm surprised they even offered a bounty and worked with them for public disclosure rather than threatening to sue.
$30k is a pittance for the work put in if you were to negotiate with them as contractors, but it's still a good chunk of change for essentially unprompted, free work. They didn't need to pay them a dime, after all.
yapyap 9 hours ago [-]
I've been pretty disappointed with the seemingly small payouts some of the bug bounty submissions I've seen have been getting.
It's like the companies "forgot" [1] what happens when you don't have a bug bounty program, or what happens when people take their bugs to someone else instead.
1. Of course companies didn't forget; that's the benefit of the doubt I like to give, but big companies like this aren't stupid.
Big companies are ruled by the business-savvy but less technical, and those people see things like the bug bounty program as unimportant. Until they get shaken awake by a huge breach, or anything of the sort that impacts the stock price (i.e. their salary), I doubt they will care much.
exabrial 8 hours ago [-]
> knowledge of the keys is a primary requirement for target card compromise
Not claiming to be an expert, but this seems like a very big qualification. Can someone put this into context for me?
If you stole my PGP private key, you would absolutely be able to sign messages as me.
lxgr 7 hours ago [-]
They were apparently able to extract an eUICC's private key:
> As a result of eUICC compromise, we were able to extract private ECC key for the certificate identifying target GSMA card.
This is supposed to be impossible, even with knowledge of SAT applet management keys. (In other words, individual eSIM profiles are still not supposed to be able to extract private eSIM provisioning keys from any eUICC.)
In the security architecture of eSIMs, compromising any single eUICC's key means that an attacker can obtain the raw eSIM profile data from any SM-DP trusting it (which would be any of them, as long as the key chains up to a CA that is part of the GSMA PKI) and do things that are supposed to be impossible, such as installing one profile on multiple devices simultaneously, or extracting secret keys from a profile, "putting it back" to the SM-DP, letting the legitimate user download it, and then intercepting their communications.
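To illustrate the first point, here's a rough sketch under a simplifying assumption of mine (not the literal SGP.22 flow): an SM-DP will serve a profile to anything presenting a certificate that chains up to the GSMA CA.

```python
# Simplified model of SM-DP trust (not the real SGP.22/BSP protocol).
# The download server can only check the certificate chain; it has no way
# of telling a genuine tamper-resistant eUICC from software holding a
# stolen-but-valid key.

from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str  # real certs carry keys and signatures; elided in this toy

GSMA_CA = "GSMA-CI"

def sm_dp_release_profile(requester_cert: Cert, profile: bytes) -> bytes:
    if requester_cert.issuer != GSMA_CA:
        raise PermissionError("not a GSMA-certified eUICC")
    # This is the only check the SM-DP can make. In the real protocol the
    # profile is encrypted to the requester's key -- which doesn't help if
    # that key was extracted from a compromised card.
    return profile

# A software "eUICC" presenting an extracted certificate/key passes the check:
emulator = Cert(subject="cloned-eUICC", issuer=GSMA_CA)
print(sm_dp_release_profile(emulator, b"<carrier profile, Ki and OTA keys inside>"))
```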
ImPostingOnHN 7 hours ago [-]
Let's assume I have the following philosophy:
My phone, my sim or esim, and anything else which I have purchased and is in my possession, belongs to me. Being able to retrieve keys to things I own, and do whatever I want with them, seems fine. If the key to my car says "do not duplicate", I should be nonetheless able to duplicate it, because I own the car and the key. If I want to run my same profile or eSIM on multiple devices, I get that the cell company doesn't like that, but I do, so I wouldn't consider that a harm to me.
Given that assumption, this vulnerability/jailbreak/rooting of something I own seems less significant to me. I think, however, that I may be misunderstanding the attack. Is this possible to perform against somebody else for whom I will never have physical possession of the phone? Or for someone else to perform it against me, without ever having physical possession of my phone? It sounded like maybe a test profile was left enabled, which allows anyone to send an SMS-PP message to any phone, telling it to install an applet which compromises the phone/eUICC/eSIM's keys. Did I follow that right?
miki123211 6 hours ago [-]
Theoretically, if one of the carriers you were using were to be hacked, the attackers could extract all your keys, including for other carrier profiles.
It's an interesting attack vector for intelligence agencies. Imagine you're going to China and install a Chinese eSIM profile as a secondary to get cheaper data. The Chinese govt, in collaboration with the carrier, could then use that profile to dump your American AT&T keys.
In the telecom world, there's no forward secrecy (there can't be with symmetric crypto, which is what it's all based on), so such an attack would let the Chinese intercept all your communications.
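To see why, here's a toy illustration, deliberately not the real MILENAGE/AKA key schedule, just the shape of the problem: every session key is a pure function of the long-term key K and values sent over the air, so recorded traffic can be decrypted retroactively once K leaks.

```python
# Toy model of symmetric session-key derivation (NOT the actual MILENAGE/AKA
# algorithms): an eavesdropper who recorded RAND and the ciphertext only needs
# the long-term key K -- whenever it leaks -- to rebuild the session key.

import hashlib, hmac, os

K = os.urandom(16)  # long-term subscriber key, stored in the SIM and at the carrier

def derive_session_key(k: bytes, rand: bytes) -> bytes:
    return hmac.new(k, rand, hashlib.sha256).digest()[:16]

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    stream = (hashlib.sha256(key).digest() * (len(data) // 32 + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# At call time: the network sends RAND in the clear; both sides derive a session key.
rand = os.urandom(16)
ciphertext = toy_stream_cipher(derive_session_key(K, rand), b"call recorded years ago")

# Later: K is extracted (e.g. from a dumped profile). The old traffic falls with it.
recovered = toy_stream_cipher(derive_session_key(K, rand), ciphertext)
assert recovered == b"call recorded years ago"
```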
lxgr 6 hours ago [-]
That would indeed be catastrophic, but from the attack as demonstrated, I don't think we can conclude that that's possible.
As I understand it, the attack as demonstrated is extracting the eUICC provisioning private key from the context of a SAT applet, but what you're describing would be extracting the keys of eSIM profile A from the context of eSIM profile B of an unrelated carrier.
It would be great to know whether the researchers have looked into that, as it sounds like a much bigger problem if possible.
ImPostingOnHN 6 hours ago [-]
Thank you, your explanation helped me understand that the profile can itself be an application (and thus can be an exploit), and that different profiles/applications are not isolated from each other. I will be careful installing profiles from untrusted sources on my phone*.
Is there a remote attack vector against my phone/eSIM which doesn't require first compromising the network service provider? Not that I'm dismissing other vectors as unimportant, just trying to learn more.
* - I do realize that a network operator viewed as "trusted" may be untrusted under the right circumstances, like sufficient pressure from sufficiently official or powerful actors.
lxgr 6 hours ago [-]
> Let's assume I have the philosophy that my phone, my sim or esim, and anything else which I have purchased and is in my possession, belongs to me.
Then you can't use eSIMs as specified. eUICCs are an implementation of trusted computing.
> I think I may be misunderstanding the attack, though. Is this possible to perform against somebody else for whom I will never have physical possession of the phone? Or for someone else to perform it against me, without ever having physical possession of my phone?
In a non-broken eSIM security architecture, eSIM profiles are singletons, i.e. at any given time they can only be installed on one eUICC. At install time, the SM-DP decrements the profile's logical "remaining installs" counter from 1 to 0; at uninstall time, it goes back up to 1. This of course only works if the eUICC's assertion of "I deleted eSIM profile x" is trustworthy, hence it requires trusted computing.
A different security architecture not relying on trusted computing is of course possible to imagine, but that's not what current networks assume.
ImPostingOnHN 6 hours ago [-]
> Then you can't use eSIMs as specified.
Could you elaborate here, please? I am kind of ignorant on this topic. Using an exploit to root my phone results in not using the phone "as specified", but it still works, and it's okay with me, because I own it.
It sounded like the concerns you had were ones the network operator should be concerned with. Suppose I don't care about their concerns unless they result in my stuff being compromised*.
Are you saying that it breaks the security in a way that someone who doesn't own my phone and doesn't have physical access to my phone can compromise my phone and/or my eSIM?
* - for the purposes of simplifying discussion, I'm dismissing the possibility that the network operator throws up their hands and entirely stops using/allowing eSIMs because they can't control everything
lxgr 6 hours ago [-]
The eSIM lives in dedicated, tamper-proof hardware inside your phone, separate from the application processor OS (which would be the domain of rooting) and often even from the baseband. Under the eSIM security model, it holds keys that the device owner is not supposed to be able to extract, not even by physically dismantling the chip holding them.
> Are you saying that it breaks the security in a way that someone who doesn't own my phone and doesn't have physical access to my phone can compromise my phone and/or my eSIM?
Yes, it does: Currently, providers assume that any eSIM honors the "singleton contract" described above. One that does not, e.g. one simulated in software using keys extracted from a physical trusted eUICC, could be used to mount the following attack:
1. Intercept the eSIM setup QR code (which contains two things: the URL of the SM-DP and a secret profile identifier)
2. Install the eSIM profile on their "software eSIM".
3. Report the eSIM as successfully deleted to the SM-DP, which now considers it available for installs again.
4. You, the legitimate owner of the eSIM, now install it on your unmodified eUICC in your phone and go about your day.
5. One day, ideally when your legitimate SIM is offline, the attacker inserts their eSIM into a phone and intercepts phone calls and SMS to your number, initiates expensive toll calls etc.
One solution here would be to never allow re-installs of the same eSIM profile, which some providers already do, but I personally don't like eSIM profiles managed that way, as it requires me to interact with carrier support and often even pay money just to transfer an eSIM to a new device.
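To make steps 2–4 above concrete, here's a small model of the SM-DP's bookkeeping (my simplification, not the GSMA spec); the whole scheme only holds if the delete notification in step 3 comes from hardware that cannot lie.

```python
# Toy model of the SM-DP's "remaining installs" bookkeeping (not SGP.22 itself).

class ToySMDP:
    def __init__(self) -> None:
        self.remaining = {"profile-x": 1}  # singleton: one install allowed at a time
        self.active = {"profile-x": []}    # which eUICCs currently hold the profile

    def download(self, profile_id: str, euicc_id: str) -> None:
        if self.remaining[profile_id] < 1:
            raise PermissionError("profile already installed elsewhere")
        self.remaining[profile_id] -= 1
        self.active[profile_id].append(euicc_id)

    def notify_deleted(self, profile_id: str, euicc_id: str) -> None:
        # The server has no choice but to believe this report.
        self.active[profile_id].remove(euicc_id)
        self.remaining[profile_id] += 1

sm_dp = ToySMDP()
sm_dp.download("profile-x", "attacker-software-euicc")        # step 2
sm_dp.notify_deleted("profile-x", "attacker-software-euicc")  # step 3: a lie; the keys are kept
sm_dp.download("profile-x", "victim-real-euicc")              # step 4: looks perfectly normal
# Two parties now hold working credentials for the same subscription (step 5).
```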
ImPostingOnHN 6 hours ago [-]
Thank you for being patient with someone unfamiliar with this tech but nonetheless concerned with security.
> 1. Intercept the eSIM setup QR code (which contains two things: the URL of the SM-DP and a secret profile identifier)
1. How might this be done? Would this require a separate attack, or are the mechanics a part of the attack described in the article?
2. Is this possible with the eSIM already set up on my phone?
lxgr 6 hours ago [-]
How to steal the QR code is out of scope of the attack described and depends on the security posture of any given carrier, but the important point is this:
Currently, an attacker installing the eSIM profile themselves is very visible, as it breaks the QR code for the legitimate user due to the singleton property (or, if the user installs it first, locks the attacker out anyway). If it happens, the legitimate user will call in and complain, and the carrier will at least revoke the current profile, and possibly even realize that something's afoot.
That property going away probably changes the threat model of most carriers in a way not initially anticipated.
ImPostingOnHN 1 hours ago [-]
Thank you. I think my confusion about the concern stemmed from some opinions I hold:
- Any key of mine should be copyable by me.
- Any key of mine (or copy thereof) should be usable as I see fit*.
- If someone had physical access to a device, we can assume they control it and have all information and communications on it, potentially forever due to the layered architecture of modern hardware systems.
- If someone compromised a network provider, we can assume they control all configuration and communication, potentially forever on the existing devices, for the same reasons.
* - though I obviously wouldn't want to use them in a way that illegally hurts people, like driving drunk and getting into an accident
daft_pink 5 hours ago [-]
Side note: Is China ever going to get esim?
ACCount36 6 hours ago [-]
Here's hoping for a public PoC for unpatched hardware. I've been looking for a way to dump eSIMs as plaintext for a long while now.
rs_rs_rs_rs_rs 2 hours ago [-]
Always good stuff from lsd-pl people
petesergeant 10 hours ago [-]
Whenever I read stuff about telecoms security, I realize the first few weeks of any serious war will just be complete loss of cell service.
Hojojo 9 hours ago [-]
Depends. Ukraine, despite some service interruptions, still largely has cell service: https://www.euronews.com/next/2024/03/25/ukraines-telecom-en...
I think these networks can be a lot more resilient than we think and they can be maintained even during a war.
grishka 8 hours ago [-]
Meanwhile in Russia, mobile data shutdowns are becoming routine, especially in regions closer to the border/front line. They say it's to fight drone attacks, but there's no word on how effective that actually is.
jeroenhd 8 hours ago [-]
The cell network is one of the best surveillance tools humanity has ever built, as well as a network of location beacons over friendly territory. Taking it down would strongly limit the amount of information that can be gathered both passively and actively. Modern 5G networks can even act as radar.
A technically capable terrorist could wreak havoc if they got access to the control center of a telecoms network, but I don't think service would be down for extended periods of time unless it's part of a scorched-earth strategy of some kind. Any military operation can be disrupted easily with cheap and widely available jammers anyway; attacking cellular infrastructure is mostly useful for attacking civilian targets and spreading panic.
dylan604 6 hours ago [-]
And yet here I sit at my desk in my home with 1 bar of service, and I think that's only because 0 bars is not possible. It's not like they'd have to do much to disrupt cell service
toast0 5 hours ago [-]
Depends on your phone and coverage. I've seen zero bars and apparently connected. More often I see zero bars and not connected, usually has a line through the signal indicator.
Where I live, there's resistance to adding new towers, so our dead zones are pretty consistent. One part of town has very spotty coverage from all networks, but has some wifi that works a bit. Otherwise, there's a couple places where network B has no coverage, and others where network C doesn't. Last I tried, network A was hopeless at my house, but I assume it still has holes in the coverage.
exabrial 8 hours ago [-]
And you won't be able to drive your cell-network-connected car… making logistics impossible. It's a big enough wartime issue that there ought to be a regulation requiring that the cell module can be "pulled" and the car defaults to "fully enabled".
frickinLasers 7 hours ago [-]
Do you have examples of cars (that aren't Teslas, perhaps, since they don't play by normal car rules) having been disabled due to lack of cell service?
exabrial 5 hours ago [-]
Not to avoid the question, because I simply don't know, but do you (or anyone) have directions on how to yank a cell module from a list of cars and still have the car function?
Many cars have something similar (remove SIM card, cut antenna) that allows them to keep working without connectivity
ChocolateGod 9 hours ago [-]
If things like Starlink Cellular work properly, will probably help prevent that.
dylan604 6 hours ago [-]
Doesn't Starlink depend on ground stations? So toss a couple of missiles at those ground stations, and Starlink isn't as useful.
jeroenhd 8 hours ago [-]
Depends on who the invader is. If it's America going after yet another country in the Middle East, Starlink Cellular certainly won't help.
anonymars 8 hours ago [-]
> Depends on who the invader is
And probably how Elon is feeling that day about the participants
lxgr 7 hours ago [-]
Why would Starlink be more resilient against hacks than ground-based LTE?
dfox 7 hours ago [-]
It is my understanding that Starlink does not do home-grown crypto of the 3GPP kind. Also, because it is a closed ecosystem, there is no need for SIMs and the associated deployment mechanisms.
lxgr 6 hours ago [-]
Their "direct to cell" service is a regular LTE network, so it must be using all the same protocols in order to be compatible with existing devices.