Inside MATRIXX Technology – Blog 2/3: Telco Databases Are Tricky

Dave Labuda
Oct 16, 2017 TECHNOLOGY

In my earlier blog, I looked at the rise of vertically optimized databases. In this one I want to take a deep dive into the unique requirements that Telcos place on their database technology.

Telcos rely heavily on databases for many core business functions. If we explore the journey through order-to-cash, we encounter data sets with diverse characteristics that inherently dictate the appropriate database technology:

 

Traditional customer data is accessed primarily by customer care agents and other customer-facing channels to process customer orders and to deal with queries, using a classic model that is well established across many industries. Salesforce.com and others provide robust solutions based on relational databases that are well suited to the task of retrieving and editing a complex dataset for each customer that could span many years. Relational databases were really the first mainstream database technologies on the scene and are still highly applicable in this case.
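
To make that fit concrete, here is a minimal sketch of the kind of relational model a care agent's tooling sits on top of – the table and column names are invented for illustration (using Python's built-in sqlite3), not drawn from Salesforce.com or any real CRM schema:

```python
import sqlite3

# Illustrative relational model for long-lived customer data; all names invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        joined_on   TEXT NOT NULL
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        product     TEXT NOT NULL,
        ordered_on  TEXT NOT NULL
    );
""")

db.execute("INSERT INTO customer VALUES (1, 'A. Subscriber', '2009-03-01')")
db.execute("INSERT INTO customer_order VALUES (100, 1, 'Unlimited Data Plan', '2017-10-01')")

# A care agent's view: join years of related records for one customer on demand.
for row in db.execute("""
    SELECT c.name, o.product, o.ordered_on
    FROM customer c JOIN customer_order o ON o.customer_id = c.customer_id
    WHERE c.customer_id = ?
""", (1,)):
    print(row)
```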

For financial and analytics applications accessing EDRs, however, relational databases have become obsolete. In the flip phone era, the average subscriber created 10 – 15 event records per day. Ten years into the smartphone era, many subscribers are creating hundreds to thousands of usage records per day, since usage is no longer tied to one human performing one task at a time (i.e. making a phone call). In addition, real-time analytics and campaign management are becoming table stakes for digital interactions and upselling, so this huge volume of data must be searched and analyzed with very low latency. Because the event records are immutable, full transactional semantics are not critical, so the recent wave of web-scale NoSQL databases, such as MongoDB, has shown key advantages for this data category. They offer extremely performant and cost-effective solutions for storing vast volumes of immutable data that can be used for post-event processing.
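
As a rough illustration of why this category maps so naturally onto a document store, the sketch below writes an immutable usage event and runs a simple aggregation over it. It assumes a locally reachable MongoDB instance and the pymongo driver, and every collection and field name is invented for the example:

```python
from datetime import datetime, timezone
from pymongo import MongoClient  # assumes pymongo and a reachable MongoDB server

client = MongoClient("mongodb://localhost:27017")   # illustrative connection string
events = client["charging"]["usage_events"]         # illustrative database/collection names

# Each usage event is written once and never updated, so no multi-document
# transaction is needed; a single insert is enough.
events.insert_one({
    "subscriber_id": "46701234567",
    "event_type": "data_session",
    "bytes": 1_572_864,
    "event_time": datetime.now(timezone.utc),
})

# An index on (subscriber, time) keeps per-subscriber lookups fast as volume grows.
events.create_index([("subscriber_id", 1), ("event_time", -1)])

# Post-event analytics with low latency: total data used per subscriber today.
midnight = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
pipeline = [
    {"$match": {"event_time": {"$gte": midnight}}},
    {"$group": {"_id": "$subscriber_id", "total_bytes": {"$sum": "$bytes"}}},
]
for doc in events.aggregate(pipeline):
    print(doc)
```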

Real-time Charging and Policy data is where things really start to get interesting – I see it as the engine room for any Telco that wants to become a digital service provider (DSP).  Keeping up with the volume of real-time transactions used to be straightforward – post-paid subscribers were all managed by batch systems, and early pre-paid systems, while real-time, were handling 10 – 15 events per day per subscriber and largely just counting minutes – the same business logic that runs in a parking meter.

The world today is fundamentally different – all digital services are by their nature real-time and on-demand. People use their phones more than 150 times per day, creating hundreds to thousands of transactions per smartphone. Payment methods are also converging, with customers combining prepaid and post-paid offers but expecting the same precise, instant experience. Tariff plans have become far more complex, layered with content entitlements and other promotions. There is also greater demand for self-care journeys and on-the-fly service personalization, all delivered through modern digital channels that rely on instant interactions.
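
A quick back-of-the-envelope calculation shows how fast this adds up. The subscriber count and per-device event rate below are illustrative assumptions drawn only from the ranges mentioned above, not measurements from any operator:

```python
# Illustrative sizing; every input here is an assumption, not a measurement.
subscribers = 10_000_000          # a mid-sized operator
events_per_device_per_day = 500   # "hundreds to thousands" of transactions per smartphone
seconds_per_day = 24 * 60 * 60

average_tps = subscribers * events_per_device_per_day / seconds_per_day
peak_tps = average_tps * 3        # traffic is bursty; peaks run well above the average

print(f"average: ~{average_tps:,.0f} transactions/second")   # ~57,870
print(f"busy-hour peak: ~{peak_tps:,.0f} transactions/second")  # ~173,611
```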

If you want to keep your customer balances correct, give instant access to services and ensure customers pay correctly, you need a database that can keep up with the transaction and update volumes under every circumstance. You can’t let customers spend the same dollar twice – it’s bad for business – so you need to maintain accuracy right down to the cent, every second of every day. When you add in ‘always on’ smartphones, millions of subscribers expecting instant gratification, and ubiquitous sharing, you can see why Telcos need one hell of a database!
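
The "spend the same dollar twice" problem is easy to show in code. The toy sketch below interleaves two debits against a shared balance by hand – nothing like how a real charging system is written – purely to illustrate what happens when concurrent updates are not isolated:

```python
# Two charging requests read a shared balance before either one writes back.
balance = 1.00  # one dollar left on a shared account (illustrative)

# Request A and request B both read the balance first...
read_by_a = balance
read_by_b = balance

# ...both decide a $1.00 spend is allowed...
assert read_by_a >= 1.00 and read_by_b >= 1.00

# ...and both write back their result, the second overwriting the first.
balance = read_by_a - 1.00
balance = read_by_b - 1.00

print(balance)  # 0.0 – yet $2.00 of service was granted against a $1.00 balance
```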

To maintain accurate real-time data in this new world of chaos, you need a database that is both ACID compliant¹ and able to handle the extreme volumes with very low latency. The unique challenge facing databases that support Real-Time Charging and Policy relates to the high write intensity of the data.

Compounding this problem are the unpredictable, highly concurrent actions arriving simultaneously from network systems and customer activity, creating extremely high-volume, high-complexity updates to the database. The traditional approach relational databases take to achieve ACID compliance is to lock the data used within each transaction, thus serializing any transactions that would otherwise collide (overlap) and cause a data integrity violation. This approach works well for ensuring data integrity in a sprawling, low-velocity data set like traditional customer data, but it severely impacts performance and latency for high-volume, high-complexity real-time systems. In-memory databases like TimesTen provide a small amount of relief, but they are still built on the same locking principles, so they have no hope of keeping up with the flood of on-demand digital transactions. Web-scale technology, such as Cassandra, is also ill suited to transaction processing, as it relaxes certain ACID compliance criteria to handle the volume. Recently some vendors have offered ACID compliance layered on top of web-scale databases, but if you look under the covers this approach imposes the traditional locking model and therefore destroys the “web-scale” aspects of the database underneath.
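
For a feel of what that locking model costs, here is an illustrative sketch (plain Python, not the internals of any particular RDBMS) in which every record carries a lock and a transaction must hold the locks for everything it touches – so every transaction that touches a hot, shared record queues behind the same lock:

```python
import threading

# Pessimistic locking sketch: each record carries a lock; a transaction holds
# the locks for every record it touches. Names and structure are illustrative.
class Record:
    def __init__(self, value):
        self.value = value
        self.lock = threading.Lock()

balances = {"alice": Record(10.0), "family_shared": Record(50.0)}

def debit(record_keys, amount):
    # Acquire locks in a fixed key order to avoid deadlock, then apply updates.
    for key in sorted(record_keys):
        balances[key].lock.acquire()
    try:
        for key in record_keys:
            balances[key].value -= amount
    finally:
        for key in sorted(record_keys):
            balances[key].lock.release()

# Transactions on unrelated records can run in parallel, but every transaction
# touching "family_shared" is serialized behind its lock – and a complex
# charging transaction may touch many such records at once.
debit(["alice", "family_shared"], 1.0)
```

Hold those locks across complex, multi-balance charging transactions at tens of thousands of requests per second and the contention itself becomes the bottleneck.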

Michael Stonebraker, a pioneer in the world of database technology, recognized these limitations while building a massively scalable stock trading platform. His solution was to build VoltDB with a radical new approach to ACID compliance. Since each stock trade impacts a very small set of data, and the data needed to trade Stock A vs. Stock B does not overlap, he could partition the database into many small sets of independent data and run the transactions within any one data set on a single processing thread. This effectively serializes all trades on the same stock, but allows trades on different stocks to run at full speed in parallel with no locking. This approach works brilliantly for simple transactions across isolated data sets, but it fails quickly when applied to the complex, overlapping data sets required for modern Charging and Policy logic, especially with the explosion of shared plans. In short, the Digital Telco environment presents unique challenges that are not addressed by any of these database technologies, and those challenges will only be magnified as mobile devices and digital consumption continue to grow at breakneck pace.
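
Here is a toy sketch of that partition-and-serialize idea and of exactly where it breaks down – the partition placement and record names are invented for illustration, and this is not VoltDB’s actual machinery:

```python
# Partitioned, single-threaded execution model: one serial work queue per partition.
partitions = {0: [], 1: [], 2: [], 3: []}

# Fixed, illustrative placement of records onto partitions.
placement = {
    "stock:AAPL": 0,
    "stock:GOOG": 1,
    "balance:alice": 2,
    "balance:family_shared": 3,
}

def submit(txn_keys, operation):
    owners = {placement[k] for k in txn_keys}
    if len(owners) == 1:
        # Single-partition transaction: queued and run serially, lock-free,
        # while the other partitions keep processing in parallel.
        partitions[owners.pop()].append(operation)
    else:
        # Multi-partition transaction: the lock-free fast path is gone and the
        # engine must coordinate partitions – exactly what overlapping data
        # such as shared plans forces on charging and policy logic.
        raise RuntimeError(f"{operation!r} spans partitions {sorted(owners)}")

submit(["stock:AAPL"], "trade AAPL")   # independent data: full speed, no locking
try:
    submit(["balance:alice", "balance:family_shared"], "charge against shared plan")
except RuntimeError as err:
    print(err)
```

The moment a single charging request must touch balances that live in different partitions – a shared family plan, a group allowance, a layered promotion – the lock-free, single-threaded fast path disappears.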

In my final blog in this series I will talk about how we handle high volume, high complexity transactional data here at MATRIXX using our patented Parallel-MATRIXX database technology. This database sits at the heart of our Digital Commerce Platform and is one of the key technologies that sets us apart from everyone else offering digital transformation solutions.

 


¹ ACID compliance (Atomicity, Consistency, Isolation, Durability) is a set of database characteristics that guarantees that transactions run in parallel will result in the same end state as if they had been run serially. Financial systems such as Charging & Policy must provide a precise, always-correct answer and therefore require ACID compliance. Web-scale database technologies, while highly scalable, do not provide ACID compliance.

 
