
Tuesday, October 10, 2023

Overlapping private networks - an emerging challenge for spectrum management

This post originally appeared in September 2023 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

File this one under “high quality problems”!
 
We’re starting to see a trend towards multiple enterprise private 5G networks on the same site, or very close to each other. That has a lot of implications.

Various large campus-style environments - ports and airports today, and perhaps business parks, industrial zones and others in future - will need to deal with the coexistence of several company-specific #5G networks.

For instance, an airport might have different networks deployed at the gates for aircraft turnaround, in the baggage-handling area for machinery, across the ramp area for vehicles, in the terminals for neutral host access, and in maintenance hangars for IoT and AR/VR.

Importantly, these may be deployed, owned and run by *different* companies - the airport authority, airlines, baggage handlers and a contracted indoor service provider, perhaps. In addition there could be other nearby private networks outside the airport fence, for hotels, warehouses and car parks.

This is something I speculated about a few years ago (I dug out the slide below from early 2020), but it is now starting to become a reality.

This is likely to need some clever coordination in terms of #spectrum management, as well as other issues such as roaming/interconnect and perhaps numbering resources such as MNC codes as well. It may need new forms of #neutralhost or multi-tenant setups.
 
Yesterday I attended a workshop run by the UK Spectrum Policy Forum. While the main focus was on the 3.8-4.2GHz band, and the discussion was held under the Chatham House Rule (so I can't cover the specifics), one speaker has allowed me to discuss his comments directly.

Koen Mioulet from European private network association EUWENA gave an example of the Port of Rotterdam, which has 5 different terminals and 3000 businesses, including large facilities run by 28 different chemical companies. It already has two #PrivateLTE networks, plus 5G used on a "container exchange route" for vehicles - and more networks are possible on ships themselves. It is quite possible to imagine 10+ overlapping networks in future.
 
While the UK has 400MHz potentially available in 3.8-4.2GHz, some countries only have 50-100MHz for P5G. That would pose significant coordination challenges and may necessitate an "umbrella" network run by (in this case) the Port Authority or similar organisation. An added complexity is synchronisation, especially if each network is set up for different uplink/downlink splits for specific applications.
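To make the coordination challenge concrete, here is a back-of-envelope sketch in Python. All the bandwidth figures are illustrative assumptions, not regulatory data - the point is simply how quickly a small spectrum pool runs out when each tenant wants a dedicated channel.

```python
def max_dedicated_networks(pool_mhz: float, per_network_mhz: float,
                           guard_mhz: float = 0.0) -> int:
    """Non-overlapping channels that fit in the pool. guard_mhz models any
    guard band needed between unsynchronised TDD neighbours (zero if all
    networks share a common frame structure)."""
    return int(pool_mhz // (per_network_mhz + guard_mhz))

# UK-style availability (3.8-4.2GHz) vs more spectrum-constrained markets
for pool_mhz in (400, 100, 50):
    print(f"{pool_mhz} MHz pool: "
          f"{max_dedicated_networks(pool_mhz, 40)} networks at 40 MHz each, "
          f"{max_dedicated_networks(pool_mhz, 100)} at 100 MHz each")
```

With only 50-100MHz available, two or three tenants exhaust the pool - which is exactly why geographic reuse, tight synchronisation or an umbrella operator becomes necessary.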

MNOs could be involved too, in roles from wholesale provision, down to just spectrum leasing. Whatever happens, regulators and others need to start thinking about this.

In the past I’ve half-jokingly suggested that a new 6G target metric should be to have “1000 networks per sq km” rather than the usual “million devices per sq km” or similar.

Maybe we should start with 10 or 100 nearby networks, but that joke is now looking like a real problem, albeit a healthy one for the private cellular industry.
 

 

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


 

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond latency is often mentioned, usually in the context of applications like autonomous vehicles needing snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that's in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outposts. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of the core network slicing or the edge – and perhaps IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL, which actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
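To get a feel for the numbers, here is a deliberately crude toy model (my own simplification, not a 3GPP figure) of how 5G NR slot duration and grant-free transmission affect one-way air-interface latency. The processing and scheduling overheads are assumptions for illustration.

```python
def air_latency_ms(mu: int, grant_free: bool, proc_ms: float = 0.3) -> float:
    """Crude one-way radio latency estimate for 5G NR numerology mu."""
    slot = 1.0 / (2 ** mu)                    # NR slot duration: 1 ms / 2^mu
    align = slot / 2                          # average wait for a slot boundary
    sched = 0.0 if grant_free else 3 * slot   # assumed SR + grant exchange
    return align + sched + slot + proc_ms     # plus one slot to transmit

for mu in (0, 1, 2, 3):   # 15 / 30 / 60 / 120 kHz subcarrier spacing
    print(f"mu={mu}: scheduled ~{air_latency_ms(mu, False):.2f} ms, "
          f"grant-free ~{air_latency_ms(mu, True):.2f} ms")
```

Even in this generous model, millisecond-class latency only appears with short slots *and* grant-free access - in other words, with capacity reserved as overhead, exactly as described above.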

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere – essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users generating lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.
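A toy calculation shows why the frame structure matters so much. Take a hypothetical downlink-heavy TDD slot pattern (the "DDDSU" pattern and 0.5ms slot length below are illustrative, not any regulator's actual rules) and ask how long an uplink packet might wait for its next transmit opportunity:

```python
def worst_case_ul_wait_ms(pattern: str, slot_ms: float) -> float:
    """Longest wait for the next uplink ('U') slot in a repeating TDD
    pattern, checked from every possible arrival point. Assumes the
    pattern contains at least one 'U'."""
    n = len(pattern)
    worst = max(
        next(i for i in range(1, n + 1) if pattern[(start + i) % n] == "U")
        for start in range(n)
    )
    return worst * slot_ms

# D = downlink, S = special/switching, U = uplink; 0.5 ms slots (30 kHz SCS)
print(worst_case_ul_wait_ms("DDDSU", 0.5), "ms worst-case uplink wait")
print(worst_case_ul_wait_ms("DU", 0.5), "ms with an even (URLLC-friendlier) split")
```

A downlink-heavy pattern alone can consume a couple of milliseconds before any data is even sent - before backhaul, core or server time is added.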

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
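One objective anchor for such a heatmap is simple physics: the speed of light in fibre puts a hard floor under round-trip time at any given distance, whatever the radio or compute layers do. The sketch below is my illustrative reconstruction of that boundary, not the heatmap itself.

```python
FIBRE_US_PER_KM = 5.0            # ~2e8 m/s propagation speed in glass

def rtt_floor_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time over fibre, in ms."""
    return 2 * distance_km * FIBRE_US_PER_KM / 1000.0

budgets_ms = [0.001, 0.01, 0.1, 1, 10, 100]          # 1 us .. 100 ms round-trip
distances_km = [0.001, 0.01, 0.1, 1, 10, 100, 1000]  # 1 m .. 1000 km

print("km\\ms " + "".join(f"{b:>8g}" for b in budgets_ms))
for d in distances_km:
    row = ["yes" if rtt_floor_ms(d) <= b else "no" for b in budgets_ms]
    print(f"{d:>5g} " + "".join(f"{cell:>8}" for cell in row))
```

Everything below-left of that boundary is physically impossible, however good the network - one reason the 1ms ambition only makes sense for very local deployments.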

[Chart: heatmap of latency value, plotting latency buckets against distance]

The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too-long really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitute risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, readings of a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than 200 millisecond latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Saturday, August 08, 2020

A rant about 5G myths - chasing unicorns

Exasperated rant & myth-busting time.

I actually got asked by a non-tech journalist recently "will 5G change our lives?"

Quick answer: No. Emphatically No.


#5G is Just Another G. It's not a unicorn

Yes, 5G is an important upgrade. But it's also *massively* overhyped by the mobile industry, by technology vendors, by some in government, and by many business and technology journalists.

- There is no "race to 5G". That's meaningless geopolitical waffle. Network operators are commercial organisations and will deploy networks when they see a viable market, or get cajoled into it by the terms & timing of spectrum licenses.

- Current 5G is like 4G, but faster & with extra capacity. Useful, but not world-changing.

- Future 5G will mean better industrial systems and certain other cool (but niche) use-cases.

- Most 5G networks will be very patchy, without ubiquitous coverage, except for very rudimentary performance. That means 5G-only applications will be rare - developers will have to assume 4G fallback (& WiFi) are common, and that dead-spots still exist.

- Lots of things get called 5G, but actually aren't 5G. It's become a sort of meaningless buzzword for "cool new wireless stuff", often by people who couldn't describe the difference between 5G, 4G or a pigeon carrying a message.

- Anyone who talks about 5G being essential for autonomous cars or remote surgery is clueless. 5G might get used in connected vehicles (self-driving or otherwise) if it's available and cheap, but it won't be essential - not least as it won't work everywhere (see above).

- Yes, there will be a bit more fixed wireless FWA broadband with 5G. But no, it's not replacing fibre or cable for normal users, especially in competitive urban markets. It'll help take FWA from 5% to 10-12% of global home broadband lines.

- The fact the 5G core is "a cloud-native service based architecture" doesn't make it world-changing. It's like raving about a software-defined heating element for your toaster. Fantastic for internal flexibility. But we expect that of anything new, really. It doesn't magically turn a mobile network into a "platform". Nor does it mean it's not Just Another G.

- No, enterprises are not going to "buy a network slice". The amount of #SliceWash I'm hearing is astonishing. It's a way to create some rudimentary virtualised sub-networks in 5G, but it's not a magic configurator for 100s or 1000s of fine-grained, dynamically-adjusted different permutations all coexisting in harmony. The delusional vision is very far removed from the mundane reality.

- The more interesting stuff in 5G happens in Phase 2/3, when 3GPP Release 16 & then Release 17 are complete, commercialised & common. R16 has just been finalised. From 2023-4 onward we should expect some more massmarket cool stuff, especially for industrial use. Assuming the economy recovers by then, that is.

- Ultra-reliable low-latency communications (URLLC) sounds great, but it's unclear there's a business case except at very localised levels, mostly for private networks. Actually, UR and LL are two separate things anyway. MNOs aren't going to be able to sell reliability unless they also take legal *liability* if things go wrong. If the robot's network goes down and it injures a worker, is the telco CEO going to take the rap in court?

- Getting high-performance 5G working indoors will be very hard, need dedicated systems, and will take lots of time, money and trained engineers. It'll be a decade or longer before it's very common in public buildings - especially if it has to support mmWave and URLLC. Most things like AR/VR will just use Wi-Fi. Enterprises may deploy 5G in factories or airport hangars or mines - but will engineer it very carefully, examine the ROI - and possibly work with a specialist provider rather than a telco.

- #mmWave 5G is even more overhyped than most aspects. Yes, there's tons of spectrum, and in certain circumstances it'll have huge speed and capacity. But it's got short range and needs line-of-sight. Outdoor-to-indoor coverage will be near zero. Having your back to a cell-site won't help. It will struggle to go through double-glazed windows, the shell of a car or train, and maybe even your bag or pocket. Extenders & repeaters will help, but it's going to be exceptionally patchy (and need tons of fibre everywhere for backhaul).

- 5G + #edgecomputing is not going to be a big deal. If low-latency connections were that important, we'd have had localised *fixed* edge computing a decade ago, as most important enterprise sites connect with fibre. There's almost no FEC, so MEC seems implausible except for niches. And even there, not much will happen until there's edge federation & interconnect in place. Also, most smartphone-type devices will connect to someone else's WiFi between 50-80% of the time, and may have a VPN which means the network "egress" is a long way from the obvious geographically-proximal edge.

- Yes, enterprise is more important in 5G. But only for certain uses. A lot can be done with 4G. "Verticals" is a meaningless term; think about applications.

- No, it won't displace Wi-Fi. Obviously. I've been through this multiple times.

- No, all laptops won't have 5G. (As with 3G and 4G. Same arguments).

- No, 5G won't singlehandedly contribute $trillions to GDP. It's a less-important innovation area than many other things, such as AI, biotech, cloud, solar and probably quantum computing and nuclear fusion. So unless you think all of those will generate 10's or 100's of $trillions, you've got the zeros wrong.

- No, 5G won't fry your brain, or kill birds, or give you a virus. Conspiracy theorists are as bad as the hypesters. 5G is neither Devil nor Deity. It's just an important, but ultimately rather boring, upgrade.

There's probably a ton more 5G fallacies I've forgotten, and I might edit this with a few extra ones if they occur to me. Feel free to post comments here, although the majority of debate is on my LinkedIn version of this post (here). This is also the inaugural post for a new LinkedIn newsletter. Most of my stuff is not quite this snarky, but it depends on my mood. I'm @disruptivedean on Twitter so follow me there too.

If you like my work, and either need a (more sober) business advisory session or workshop, let me know. I'm also a frequent speaker, panellist and moderator for real and virtual events.

Just remember: #5GJAG. Just Another G.

Monday, February 24, 2020

3rd Neutral Host Workshop + OpenRAN for shared networks. Early bird still available


NOTE: Owing to uncertainty around the impact of Coronavirus on travel, event attendance, company policies & venues, this workshop has been postponed from 31st March until 7th July. We have contacted existing registered attendees to discuss the options

On July 7th (postponed from March 31st) I'll be running my 3rd public workshop on Neutral Host Networks in central London, together with colleague Peter Curnow-Ford.

As well as covering the basics of new wholesale/sharing models for MNOs, both with and without dedicated spectrum, we will also be looking more closely at the fit between NHNs and new virtualised vRAN / OpenRAN technologies. 

We'll cover all the various use-cases: metro-area network densification, indoor systems for various venues, road/rail coverage, rural wholesale models, FWA and more. 

The links (and differences) between neutral-host and private LTE/5G will be discussed, as well as alternative models such as multi-MNO sharing or national roaming. (see this post for some previous thoughts on this)

Different countries' competitive, regulatory and spectrum positions will be covered, to assess how that will impact the evolution of NHNs. 
 
Early bird pricing is available before June 7th.

Full details and registration are available here

Monday, January 07, 2019

Private cellular networks - why Ofcom's UK spectrum proposals are so innovative

On December 18th 2018, Ofcom announced two consultations about new 5G-oriented spectrum releases (link), and potential new models for spectrum-sharing, rural mobile coverage and related innovation (link). 

I've already commented briefly on Twitter (link) and LinkedIn (link), but it's worth going a bit deeper in a full post on this - particularly on the aspects relating to private networks and spectrum-sharing.

NOTE: this is a long post. Get a coffee now. Or listen to my audio commentary (Part 1 on the background to private mobile networks is here and Part 2 on the Ofcom proposals is here)

My view is that 2019 is a key breakout year for new mobile network ownership and business models - whether that's fully-private enterprise networks, various types of neutral-host, or a revitalised version of MVNO-type wholesale perhaps enriched by network-slicing. 

This trend touches everything from IoT to 5G verticals, to enterprise voice/comms & UCaaS. I'll be covering it in depth. I also discussed it when I presented to Ofcom's technology team in November (see slides halfway down this page), and it's good to see my thinking seems to align fairly closely with theirs.


This was the future, long ago

Localised or private cellular networks - sometimes called Micro-MNOs - are not a new concept.

Twelve years ago, in 2006, the UK telecoms regulator Ofcom made an unusual decision - to auction off a couple of small slices* of 2G mobile spectrum, for use on a low-power, localised basis by a number of innovative service providers or private companies. (Link). A few launches occurred, and the Dutch regulator later did something similar, but it didn't really herald a sudden flourishing of private mobile networks.

*(The slices were known as the DECT Guard Bands, which separated GSM mobile bands from those used for older cordless phones, widely used in homes and businesses)

Numerous practical glitches were blamed, including the costs / complexities involved in deploying small-cells, the need for roaming or MVNO deals for wide-area coverage, and the fact that the spectrum was mostly suitable for voice calls, at a time when the world was moving to mobile data and smartphones. 

Unfortunately, there was also no real international momentum or consensus on the concept, despite Ofcom's hope to set a trend - although it did catalyse a good UK-based cottage industry of small-cell and niche core-network vendors.


Going mainstream: private / virtualised networks for enterprise & verticals

At the start of 2019, the world looks very different. There is a broad consensus that new models of mobile network are needed - whether that is fully-owned private cellular, more-sophisticated MVNOs with their own core networks, or future visions of 5G with privately-run "network slices". 


There's a focus on neutral-host networks for in-building coverage, proponents of wholesale national "open" networks, and a growing number of large non-telecoms enterprises wanting more control and ownership.




It is unrealistic to expect the main national MNOs to be able to pay for, deploy, customise, integrate and operate networks for every industry vertical, indoor location or remote area. They have constraints on capital, personnel, management resource, specialised knowledge and appetite for risk. Other types of network operator or service provider are needed as well.

In a nutshell, there is a wide recognition that "telecoms is too important to just leave up to the telcos".

I've been talking about this for several years now - the rise of unlicensed cellular technologies such as MulteFire or Huawei's eLTE, the growing focus on locally-licensed or shared spectrum for IoT or industry use, and the specific demands of rural, indoor or industrial network coverage and business models.

(As well as non-MNO deployed and owned 4G/5G networks, we will also see a broad range of other ways to deliver private capabilities, including various evolutions of MVNO, mobile SD-WAN and future network-slicing and private cores. But this particular consultation is more about the radio-centric innovations).


Where is the action?

But while there has been a lot of discussion in the UK (including my own presentations to Ofcom, the Spectrum Policy Forum and others), the main sources of action on private (licensed) cellular have been elsewhere. 

In particular, the US push on its CBRS 3-tier model of network sharing - expected to yield the first local service launches in 2019 - and German and Dutch approaches to local-licensed spectrum for industry, have been notable. Unlicensed cellular adoption is (fairly quietly) emerging in Japan and China as well.

Plenty of other trials and regulatory manoeuvring have occurred elsewhere too, with encouraging signs from bodies like ITU, BEREC and assorted national authorities that private/local sharing is becoming important.

In the UK, various bodies including Ofcom, National Infrastructure Commission, DCMS (the ministry in charge), TechUK/Spectrum Policy Forum (link) and others have referenced the potential for shared/private spectrum - and even invited me to talk about it - but until now, not much concrete has happened.


What use-cases are important here?

From my perspective, the main focus in actual deployment of private LTE has been for industrial IoT and especially the ability for large enterprises to run their own networks for factories, robots, mining facilities, (air)ports or process plants. Some of these also want human communications as well, such as replacing TETRA mobile radio / walkie-talkie units with more sophisticated cellular smartphone-type devices, or links to UCaaS systems.

These are all seen as future 5G opportunities by vendors too. They are also often problematic for many MNOs to cover directly - few are really good at dealing with the specialised demands of industrial equipment and installations, and the liability, systems-integration and customisation work required.

Together with big companies like GE and Bosch and BMW, there has been some lobbying action as well. CBRS has had a broader appeal, with numerous other categories showing interest too, from sports stadium owners, to cable operators looking for out-of-home coverage for quadplay, or fixed-wireless extensions.

But I'd say that rural coverage, and more generic in-building use-cases, have had less emphasis by regulators or proponents of Micro-MNO spectrum licensing. That's partly because rural uses often struggle to generate business cases and have fragmented stakeholders almost by definition, while in-building represents an awkward mix of rights, responsibilities and willingness-to-pay.

Yet it is these areas - especially rural - that Ofcom is heavily focused on, partly in response to some UK Government policy priorities, notably around rural broadband coverage.


What has been announced?

There are two separate announcements / consultations:
  • An immediate, specific proposal for 700MHz and 3.6-3.8GHz auctions to have additional coverage conditions added to "normal" national mobile licenses, especially for rural areas. This includes provisions for cheaper license fees for operators that agree to build new infrastructure in under-served rural areas, and cover extra homes in "not-spots" today.
  • A more general consultation on innovation, which focuses on various interesting sharing models for three bands: the 1800MHz DECT guard bands (as discussed above), the 3.8-4.2GHz range and also 10MHz around 2.3GHz.

The first proposal is essentially just a variation of "normal 3.5GHz-band national 5G licenses", similar to the earlier 3.4-3.6GHz tranche which has already been released in the UK. Some were hoping that this would have some sort of sharing option, for instance for neutral-host networks in rural or industrial sectors, but that has been sidelined. 

Unlike Germany, which has just 3 MNOs and a powerful industrial lobby wanting private spectrum, the UK has to squeeze 4 MNOs' 5G needs into this band, with a big chunk already belonging to 3/UK Broadband. So, it has stuck with fairly normal national licenses. Instead, there's some tweaks to incentivise MNOs to build out better rural coverage. This helps address some of the UK government's and voters' loudly voiced complaints, but doesn't really affect this post's core theme of private/novel network types.

It is the second consultation that is the most radical - and the one which could potentially reshape the mobile industry in the UK. There are two central elements to its proposals:
  • Local-licensed spectrum in three "shared" bands, with Ofcom managing authorisations itself, with a fixed pricing structure that is just based on cost of administration, rather than raising large sums for the Treasury. There are proposals for low-power and mid-power deployments, suitable respectively for individual buildings or sparsely-populated rural areas.
  • Secondary re-use of existing national licensed bands. In essence, this means that any existing mobile band could be subject to 3rd-party localised, short-term licensing in areas where there is no existing coverage. This is likely to be hugely controversial, but makes inherent sense - essentially it's a form of "use it or lose it" rule for MNOs. 

Local licensing in shared bands
 
The local licensing idea has numerous potential applications, from industrial sites to neutral-hosts to fixed-wireless access in rural districts. It updates the 1.8GHz 2006 low-power wireless licenses to the new approach, and adds in the new bands in 2.3GHz and 3.8-4.2GHz. 

While I'm sure that some objections will be raised - for example, perhaps around the low-cost aspects of these new licenses - I struggle to find many grounds for substantive disagreement. It is, essentially, a decent pitch for a halfway-house between national licenses and complete WiFi-style unlicensed access. Like CBRS in the US (which is much more complex in many ways) it could drive a lot of innovative network deployments - but at smaller scale, since CBRS is aimed at county-sized areas, rather than local areas as small as 50m in diameter.



There are numerous innovations here - and considerable pragmatism too, and plenty of homework that's been done already. The medium-power band, and the rural restrictions for outdoor use, are both definitely interesting angles - and well-designed to ensure that this doesn't allow full national/mobile competition "on the cheap" by aggressive new entrants. The "what if?" consultation sections on "possible unintended consequences" and ways to mitigate them are especially smart - frankly, all governmental policy documents should do something similar.

Ofcom also discusses options for database-driven dynamic spectrum approaches (similar to CBRS, white spaces and others) but thinks that would take too long to develop. It essentially wants a quasi-static authorisation mechanism, but with short enough terms - 3 years - that it can transition to some DSA-type option when it's robust and flexible enough. 

(As an aside, I wonder if the ultimate version is some sort of blockchain-ish decentralised-database platform for dynamic spectrum, which in theory sounds good, but has not been tried in practice yet. And no, it shouldn't be based on SpectrumCoin cryptocurrency tokens).


Secondary licensing of existing bands

This is the really controversial one.

It basically tells the MNOs that their existing - or future - national licenses don't allow them to "bank" spectrum in places where it's not going to be actively used. If there's no coverage now, and no credible mid-term plans for build-out, then other parties can apply to use the spectrum instead, as long as Ofcom agrees that there's no risk of interference.

Unlike the shared-band approach (except for 1800MHz), this means that devices will be available immediately, as they would operate in the same bands that already exist. It would also potentially apply for the new 5G bands, especially 3.4-3.8GHz. 

There's a proposed outline mechanism from Ofcom to verify that suggested parallel licenses should be able to go ahead, and again a fairly low-cost pricing mechanism.
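To illustrate the kind of screening such a mechanism implies, here is a naive sketch using free-space path loss. Ofcom's actual methodology will be far more sophisticated (terrain-aware propagation, antenna patterns, aggregate interference), and every power level and threshold below is an assumption for illustration only.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def secondary_use_ok(tx_power_dbm: float, distance_km: float,
                     freq_mhz: float, protect_dbm: float = -110.0) -> bool:
    """Crude screen: is the secondary signal, at the incumbent's coverage
    edge, below an assumed protection threshold?"""
    return tx_power_dbm - fspl_db(distance_km, freq_mhz) < protect_dbm

# A 24 dBm medium-power site in the 3.6 GHz band, at varying separation
# from the incumbent's coverage edge
for d_km in (10, 50):
    print(f"{d_km} km separation: permitted = {secondary_use_ok(24, d_km, 3600)}")
```

Since free space is close to a worst case for interference propagation, a screen like this errs on the side of protecting the incumbent - which is roughly the spirit of Ofcom's proposal.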



Clearly, this is just a broad outline, and there are a lot of details to consider before this could become a reality. But the general principle is almost "use it or lose it", although more accurately it's "use it, or don't complain if someone else uses it until you're ready".

There are a few possible options that have been suggested in the past for this type of thing - leasing or sub-licencing of spectrum by MNOs, or some form of spectrum trading, for instance. In some countries / places this has worked OK, for example for mines in the Australian Outback running private cellular, that have been able to do a deal with one of the national MNOs. But it's complex to administer, and often the MNOs don't really have incentives or mechanisms to do this at scale. They're not interested in doing site-surveys, or drawing up unique contracts for £1000 a year for a couple of farmhouses or a wind-turbine on a hilltop. Plus, there are complexities about liability with leasing (it's still the original licensee's name on the license).

While there will be costs for Ofcom to manage this process, it thinks they should be reasonable - it's pricing the licenses at £950 for a 3 year period. 

All this is pretty radical. And I expect MNOs and industry bodies to scream blue murder about this in the consultation. Firstly, they will complain about possible interference, which is valid enough, but can be ruled out in some locations. They'll talk about the internal costs of the acceptance process. And above all, they may talk about "cherry-picking" and perceived competitive distortions.

The most interesting aspect for me is how this changes the calculus for building networks indoors, in offices, factories or public buildings. This could limit the practice of MNOs sometimes insisting that enterprises pay for their own indoor systems, for delivery of the MNOs' network coverage and capacity. It could incentivise operators to focus on indoor coverage, if they want to offer managed services for IoT, for example.

There are a lot of other implications, opportunities and challenges I don't have time to address in this post, but I will pick up on them over the next weeks and months. There are technical, regulatory, commercial, practical and political dimensions.

I'm really curious to read the responses to this consultation, and see what comes out of the next round of statements from Ofcom. I'm probably going to submit something myself, as I can see a bunch of questions and complexities. Let me know if you'd like me to brainstorm any of this with you.
 

Spectrum is not enough

One thing is definitely critical for both proposals. The availability of local-licensed spectrum is not enough for innovators and enterprises to build networks. There are many other "moving parts" as well - affordable radio infrastructure such as small-cells, inexpensive (likely cloud-based) core and transport networks, numbering resources, SIMs, billing/operations software, voice and messaging integration, and so on. The consultations cover numbering concerns and MNC (mobile network codes), at least up to a point.

In some cases, roaming deals with national networks will be needed - and may be hard to negotiate, unless regulatory pressure is applied. As I've been discussing recently (including in this report for STL - link) this ties in with a wider requirement for revisiting wholesale mobile business models and regulation.


Conclusions

This is all very exciting, and underscores a central forecast of mine: that mobile network business / ownership models will change a lot in the next few years. We'll see new network owners, wholesalers and tenants - even as normal MNOs consolidate and merge with fixed-line players.

I'd like to think I've played a small part in this myself. I've advised clients, presented and run many workshops on the topic, including my own public events in May and November 2017 (link), and numerous speeches to regulators, industry groups and policymakers. Industry, rural and in-building users need both more coverage and sometimes more control / ownership of cellular networks in licensed bands. 

There will need to be customisation, systems integration and a wide variety of "special cases" for future cellular. The MNOs are not always willing or able to deliver that, so alternatives are needed. (Most will admit, privately at least, that they cannot cover all verticals and all use-cases for 4G/5G). WiFi works fine for many applications, but in some cases private cellular is more suitable.

We're seeing a variety of new network-sharing and private-spectrum models emerge around the world, and a general view that they are (in some fashion) needed. What's unclear is which approach (or approaches) is best: CBRS, German industrial networks, Dutch localised licenses, or something else. I'd say that Ofcom's various ideas are very powerful - and in the case of the secondary re-use proposal, highly disruptive.

Edit & footnote: rather than "secondary re-use", perhaps a better name for this proposal is "Cellular White-Space", given that it is, in essence, the mobile-spectrum equivalent of the TVWS model.

If you'd like to discuss this with me - or engage me for a presentation or input on strategy or regulatory submissions - please reach out and connect. I'm available via information AT disruptive-analysis DOT com

Also, please subscribe to this blog, follow me on Twitter and LinkedIn - and (new for 2019!) look out for new audio/podcast and YouTube content. 

There are two audio segments that relate to this blog post:
Part 1 covers the general background to private cellular (here)
Part 2 covers the specific Ofcom proposals (here)
 

Monday, December 04, 2017

5G & IoT? We need to talk about latency



Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.

Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two between 1-10ms and 10-100ms. Often, the main value of IoT comes from changes over long periods, not realtime control or telemetry.

Think about timescales a bit more deeply:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before the person of interest moves out of camera-shot.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices, for which there’s a real difference between 1ms and 10ms and 100ms, will be less than 10%, and possibly less than 1% of the total. 

(Separately, the network access performance might be swamped by extra latency added by security functions, or edge-computing nodes being bypassed by VPN tunnels)

The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

Are we focusing 5G too much on the occasional Goldilocks of not-too-fast and not-too-slow?