MEF PoC 137 – Rapid, scalable partner onboarding & interop testing for API Driven Inter-provider Service Automation
As customer pressure for globally accessible, on-demand network and cloud services increases, service providers must, now more than ever, be able to seamlessly negotiate with tens or even hundreds of partners to achieve the reach and range they need. Traditional, disparate means of engaging partners through proprietary portals or APIs are not sustainable.
Fortunately, providers can now standardize the way they negotiate (buying and selling) with partners using the MEF LSO Sonata APIs. LSO Sonata is a set of industry-standard APIs developed by MEF to cover the pre-order, order, and post-order processes for inter-provider negotiation of standard and non-standard products and services.
Once a provider decides to implement the MEF LSO Sonata APIs, they need to ensure their implementation is compliant and compatible with those of their partners. This blog looks at how Amartus, AT&T, and PCCW Global collaborated through MEF PoC 137 to implement MEF LSO Sonata-compliant Buyer and Seller emulators that providers can use first to develop their Sonata implementation and achieve compliance, and then to create and host partner-specific configurations for pairwise interop testing.
Streamlining the onboarding of MEF LSO Sonata-conformant operator partners
To help service providers accelerate the onboarding and interoperability validation of partner providers, Amartus partnered with AT&T and PCCW Global in MEF LSO PoC 137: Partner Onboarding and Inter-op.
In the Proof of Concept showcase, the companies demonstrated how service providers can use a MEF-facilitated service to achieve efficient, scalable onboarding and interop testing with tens or even hundreds of partners. The PoC implements the LSO Sonata onboarding and interop verification solution defined by the MEF Commercial and Business Committee; the details are described in this MEF LSO Solution Guide.
By applying the LSO-based solution described in the guide and presented in the PoC, providers can significantly scale up their onboarding process to support much larger numbers of partners in parallel.
How can these goals be achieved?
Watch the full video recording or read the interview transcript of the Panel Q&A session below.
MEF LSO PoC #137 [VIDEO]
LSO Sonata APIs: Partner Onboarding & Inter-op Verification [TRANSCRIPT]
Aidan Anderson, Access Management Director | AT&T
What is the primary benefit for the Buyer in using this emulator test environment? For example, let’s say a buyer has ten suppliers ready with pre-order and order, waiting to be onboarded. How does the seller emulator reduce the time involved?
I would say, from a buyer point of view, one of the advantages is that we can test at a time suitable for the Buyer. One thing that is really important when we’re working with sellers in different regions of the world is that time zone differences can make it very difficult to have live interactive test sessions. So the seller emulator provides a significant advantage in being able to test at a time that suits us as a buyer.
There’s also the ability to develop and test both against the reference implementation of the Sonata APIs and against seller-specific implementations. We can talk a little more about that later.
One advantage that we see, or hope to gain, is being able to develop and test against new versions as they come along, whether that’s the Billie release, the Celine release, or beyond, allowing us to upgrade in parallel with our partners: do the development and testing on the new versions, then upgrade in a more seamless process.
Finally, the main benefit I see here is that it significantly reduces the buyer resources required to support each Seller’s testing and onboarding.
Right now, we can probably support three or four suppliers in parallel with the development resources we’ve got. By using the emulators, we can increase the parallelism significantly, allowing us to accelerate onboarding and adoption and drive the rates of global adoption.
When you’re testing onboarding, what proportion do you think can be done using this interop testing approach with emulators, and how much needs to be done by you as a buyer in any case, with or without the emulator?
I’ll answer that in two ways. First, I think 80% of the work is simply getting the implementation right compared to the Sonata reference.
There’s a lot of misunderstanding and misinterpretation of the standards. What we often find in the development and testing phase is that we’re simply finding and fixing very simple coding errors and misinterpretations of the definitions. So I expect that testing against the reference implementation would save about 80%.
Then we could probably save another 10% if the Seller were able to test against an AT&T-specific buyer implementation, so that they would understand the AT&T specifics and the nuances of the implementation and test against them.
That would leave us with about 10% where we would actually have to do pairwise testing in our real test or staging environments before migrating into production.
So my expectation with the emulated environment is that we could save between 80 and 90% of the effort that it currently takes to onboard each supplier.
That’s a dramatic increase in efficiency, in terms of the number of partners you can onboard and the use of your resources. It sounds like a five- to tenfold increase in productivity.
Yes. If you think in terms of the development resources we’ve got: if I can support three to four in parallel at the moment, then with that kind of efficiency I can support thirty in parallel. That’s the scale of difference I’m expecting to see from this.
From the point of view of the Seller, what’s the primary benefit in this solution, and how would that buyer emulation help the Seller when they have a large number of partners they wish to onboard?
Currently, we have a number of buyers who come in and adopt the APIs in order to transact with us; in this particular case, that’s for Access E-line.
The reference implementations that have been adopted in the emulators would definitely be the key benefit for buyers who want to integrate with sellers like us.
Without the emulators in place, most of the information a Buyer gets is based on the Sonata specs available online, but you do not get the detailed information, in terms of JSON requests and responses, that you get from the reference implementations in the emulators. And that detail is very much required when you’re developing the APIs, so you can develop against the standard.
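For illustration, here is a rough sketch of the kind of JSON request detail a developer works with against a seller emulator. The endpoint URL, the version in the path, and the payload fields are assumptions made for this example, not MEF-published values.

```python
# Illustrative sketch only: the endpoint URL, the API version in the path,
# and the payload fields are assumptions, not MEF-published definitions.
import requests

# A minimal Sonata-style quote request for a hypothetical product offering.
quote_request = {
    "instantSyncQuote": True,
    "quoteItem": [
        {
            "id": "1",
            "action": "add",
            "product": {
                "productOffering": {"id": "ACCESS_ELINE"}  # hypothetical offering id
            },
        }
    ],
}

response = requests.post(
    "https://seller-emulator.example.com/mefApi/sonata/quoteManagement/v8/quote",
    json=quote_request,
    timeout=30,
)
print(response.status_code)
print(response.json())  # the JSON response detail developers need to code against
```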
The second most important thing is actually just the time of engagement between the Buyer and Seller. So development teams working with each other need to have a common time window for interop testing. In many cases, that’s not possible.
And if we increase that by a factor of ten, so now you have ten buyers wanting to connect with you, you need a team of people just to go through that process.
What we found is that the emulator helps mitigate that part of the problem. You can develop in isolation right up to the point where you actually start interop with the other partners, in the last 10% of the process; until then, you use the emulators and complete the majority of the implementation. Those are the two key benefits from a buyer’s perspective.
Having clarified the value proposition for this emulator-based solution: who makes the decision in the respective organizations to employ it? These are large organizations, with complex decisions to be taken. Who is going to say, ‘OK, I need this, we’re going to need this solution’?
It varies greatly from one organization to another, in my experience. In small organizations, you may have a single decision-maker, for example a Head of Digital Strategy or a Chief Product Officer.
But in most organizations, the decision is made by a combination of people, which may include development, technical leads, product leads, and commercial leads.
It also depends on which side you’re on: if you’re on the sell side, it might be a product person who takes the decision; if you’re on the buy side, it might be a commercial person. Normally, there’s more than one, and they need collective agreement in order to move forward.
Another thing to throw into the mix is that many partners may have an insourced or outsourced environment for development.
In an insourced environment, given the skills and the agile approach of in-house development teams, I think System Architects would be key: they would look at this, see whether it helps them save time and effort, and have a say in the decision.
However, I think the decision would sit with someone with commercial responsibility, like a CIO, who takes that input from the System Architect working on the development.
In an outsourced environment, the decision would probably be left to the company working on the outsourced project.
In many cases, I’ve seen that outsourced providers have developers sitting in their own teams, so they might want to do it in-house. It depends; but mainly, if development is insourced, I would say this is left to the System Architects and the CIO.
Michael, you’ve been talking with a lot of service providers in this context. What’s your experience?
I agree with what Divesh and Aidan have mentioned. What we see a lot is that there are two aspects to it.
There are the stakeholders on the business side who look at it as a means to go to market quickly with Sonata. They look at the benefits and the total cost of ownership of adopting Sonata and being able to interoperate in a cost-effective way with their partners; that’s very critical.
But the developers, on the other side, are very interested whenever we talk to them, because the first thing a developer wants to know when you ask them about introducing a new API is: ‘OK, but what can I test against? Can you give me examples? Can I have something to work with?’
And those Postman scripts that Konrad was showing, which the emulators can generate, are very critical, because they are essentially what developers need to code towards; they need to build their APIs and work from that. Building up from the specs alone is not a trivial task.
So having a tool that can generate those scripts in the correct way, and in a compliant manner for your specific scenario, is very powerful and very important for the developer community. We found that people get very excited when we talk about the emulators, because they see the daunting task ahead of them when they look at the spec, and they say, ‘OK, this is really going to help me accelerate that process.’
Konrad mentioned custom configuration during the demo, and Aidan alluded to the 80%, 10%, and 10% split; one of those 10% portions was the configuration specific to the Buyer or the Seller. Can you elaborate on that?
Sure. This is a critical thing. With each pairwise engagement you have with your partners, you’re going to have specific products to test, and even specific ways in which you engage with your partner.
For example, you may want to do synchronous quoting and product order qualification, and you may do asynchronous ordering or you may just choose to do some specific Sonata functions and not others.
You also can be working on different releases of Sonata, so the emulator needs to support all the combinations that you could possibly have with different partners.
You need a quick and rapid way to be able to efficiently configure the emulators for those individual partner scenarios that doesn’t take a huge amount of time and effort.
That’s really a key part: you need to emulate as closely as you can the real solution your partner needs to test against. Otherwise, your partner is going to have to do a lot of retesting when they come back and point towards the real system they want to test against.
If I may add a point to what Michael just mentioned: one key thing about testing is actually the end result. When a partner is testing against the specs, you might be consistent with the MEF-defined specs, but you also need to have the seller-specific configurations in mind: what’s allowed, what’s not allowed, and what’s available in terms of resources for a specific Buyer-Seller combination.
That’s key, because you could be 100% consistent with the spec and still get a negative response every time you send a request.
So the success rate of the API depends on that specific Buyer-Seller testing: mitigating the negative cases and increasing the positive ones, so that the majority of the requests you send out actually get a successful response. That’s the whole point of this, really.
So can you give us an example of how you can use this in order to introduce new products into your portfolio and make sure that your partners can buy them easily?
I think one good aspect, which was touched on in the demonstration that Konrad did, is that it is easy to onboard a new product as long as the specs are all there.
There is a product catalog in the emulator which lets the Buyer look at what is available and test against the specs. In this case, we have the Access E-line product ready, and we can go on and onboard an Internet Access product in the same developer profile; that then becomes another testing environment for buyers. So it becomes a common tool to support whatever kind of product you onboard after that.
Do you envisage sellers introducing more and more products related to compute or voice or other types of products that could take advantage of this or do you see it primarily for the classic connectivity types of services?
I think the API ecosystem is only just taking off, and you have a mix of operators out there: cloud operators, operators providing just cross-connections in the data center, for example, and connectivity carriers with various types of connectivity services, such as MPLS, SD-WAN, and Internet Access.
So I think we could expand beyond what is working right now, the MEF-defined Access E-line service, in two directions: one is other MEF-defined services, but then there are also cloud services and even cellular mobile services.
Aidan, we’ve also thought about this, and I believe that for you, right now, the bulk of the interest is Access E-line and Internet Access. Do you have thoughts about product versatility in the context of the emulators?
Yes. Firstly, it’s very important that Sonata is product-agnostic, so as we develop new products, we can bring them to market using the same basic structure of the Sonata APIs. That in itself is very important.
What we’ve shown from the emulated environment is that it is pretty simple to stand up a new product in that environment and test against it. So if we decided that dark fiber was the next product we wanted to start buying from PCCW, it would be relatively simple to add it into the emulator environment and test against it.
As Divesh says, I’m based in Europe and he’s based in Hong Kong, and the time zone differences often mean that we’ve got one hour per day of overlap; being able to do that sort of work separately, in an emulated environment, allows you to progress at a much faster pace.
You also work a lot with smaller partners. Do you see a difference in the applicability of this type of platform depending on the size of the sellers you’re working with?
Yes, to be honest, the issue is always with the amount of development and test resource that we have available.
We can operate with a certain bandwidth, so quite clearly buyers and sellers will prioritize based on the volume of business. In today’s world, it is challenging for small providers, whether they’re on the buy side or the sell side, to get high enough up the priority list to actually get onboarded.
What the emulated environment does is allow them to do all the work themselves, in isolation, at their own pace. When they’re ready, when they’ve finished testing against the Seller- or Buyer-specific emulator, they are ready to go.
It means, from my point of view as a buyer, that a small seller can come along and I know it’s going to be simple and fast to onboard them; prioritization becomes much less of an issue if the effort to onboard them into production is relatively small. This environment is a really significant benefit for the small operators in the industry.
And that’s huge for a lot of small providers. So, Michael, once a partner has been onboarded, what kind of value does this type of platform give in terms of ongoing support for buyers and sellers? And how does it fit in with the certification MEF offers and the value that provides?
Obviously, as you mentioned earlier, it’s not static: engaging with your partner is a dynamic thing, where you’re constantly updating the product engagements you have and the types of products you might sell.
You might also introduce new versions of Sonata. Maybe you’ll add additional functionality like trouble ticketing or billing in the future, and you’ll want to be able to test against that additional functionality and reestablish the interoperability.
The purpose of the emulators and the whole interop solution is to support the activity of getting that interop working with your partners on a continuous basis. It’s more akin to the driving lessons you take when you’re learning how to drive and getting everything ready. The certification, on the other hand, is more like the qualification you reach that says, ‘My APIs conform to the API standard.’
So one is really about operating the standard with your partners; the other is a certification you can take to people and say, ‘Look, I’ve reached this level, so engage with me, I’m ready to engage with you.’ That’s the difference we see between the two. And they’re complementary in nature, because they share the same set of test requirements, defined in the MEF W92.1 test requirements specification.
I imagine they aim at different parts of the organization as well. The interop solution is probably aimed more at those responsible for development and getting onboarded, whereas the certification might be more of interest to product management and product marketing, who need to make sure that products are positioned correctly. Would you agree with that, Michael?
That’s correct. That’s certainly how we see the two different applications of the solutions.
Moving on to Konrad. An important question many will ask: if they want to test different parts of the lifecycle separately, or in different combinations, is this emulation interop solution set up for that?
What you mention, Daniel, is how the MEF W92.1 document defines the LSO Sonata test cases. The solution will provide wide-ranging support for that spec.
Considering that an individual API test consists of a prerequisite section (a particular seller configuration), a test action, and the response evaluation, the solution will support individual-stage testing, as you mentioned in your question.
So they can package it up the way they want; they don’t have to do everything at the same time, and they can combine different parts as they see fit. How do they get the test results? How are they recorded, and how do they act upon them?
We have to distinguish two cases here: the Buyer emulation and the Seller end. For the Buyer, we use a standard API test tool, Postman, and its CLI client, Newman. It provides a familiar user experience, which covers viewing, recording, and analyzing the API call executions.
For the seller emulation, the solution itself provides the runtime environment. It supports multiple inventories, where we store historical LSO Sonata artifacts; those can be viewed and analyzed at any time, and we even provide some basic statistics in that regard.
On top of this, we have observability features. These leverage Elasticsearch tools for collecting and visualizing the runtime data, which allows you to access and analyze LSO Sonata request processing and all the details around it. That’s very useful for troubleshooting unsuccessful executions, for instance when you send a broken payload.
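As a concrete illustration of the buyer-side workflow just described, here is a minimal sketch that shells out to Newman (Postman’s CLI runner) to run a Sonata test collection against a seller emulator. The collection and environment file names are hypothetical; Newman must be installed separately.

```python
# A minimal sketch of a buyer-side test run, assuming Newman is installed
# and the collection/environment file names below exist (they are
# hypothetical placeholders, not PoC deliverables).
import subprocess

result = subprocess.run(
    [
        "newman", "run", "sonata-buyer-tests.postman_collection.json",
        "-e", "partner-seller.postman_environment.json",  # partner-specific settings
        "--reporters", "cli,json",
        "--reporter-json-export", "results.json",         # recorded run for later analysis
    ]
)
# Newman exits non-zero when any test assertion fails.
print("all tests passed" if result.returncode == 0 else "some tests failed")
```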
When a seller has gone through this process and they feel comfortable that they’ve got to that 90% point Aidan was referring to, how do they communicate that to the Buyer?
How do they demonstrate, ‘Look, we’ve tested all of these things, it all looks good, it’s worth your while spending the time on that last 10% to get us across the finishing line’?
How do they communicate the results from the interop platform?
The Buyer in this testing is represented by the Postman test collection that is run against the seller system.
You can simply collect the results of the test collection execution, record them, and present them to your partner, showing that of the test cases defined within it, all are passing, or perhaps some are not and require further follow-up and discussion with your buyer partner.
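As a sketch of what sharing those results could look like, the snippet below summarizes a recorded Newman run (the hypothetical results.json report from the earlier sketch) into the pass/fail figures you would present to a partner. The field names follow Newman’s JSON reporter output.

```python
# Summarize a recorded Newman run for sharing with a partner. Assumes the
# "results.json" report produced by Newman's JSON reporter, as in the
# earlier sketch.
import json

with open("results.json") as f:
    report = json.load(f)

stats = report["run"]["stats"]["assertions"]
passed = stats["total"] - stats["failed"]
print(f"{passed}/{stats['total']} assertions passed")

# List any failing test cases that need follow-up with the partner.
for failure in report["run"]["failures"]:
    print("FAILED:", failure["source"]["name"], "->", failure["error"]["message"])
```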
We’ve talked about buyer-specific configurations. How are they stored? Let’s say a specific buyer, like Aidan, says, ‘I want this configuration to be in the interop solution.’ How do they provide that, and how do the sellers access it?
The Buyer communicates their behavior in the form of the Postman execution scripts, which are themselves the documentation of the buyer emulation system.
They basically provide the information as environment variables that represent specific configurations, which can then be obtained from the test collection itself.
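A hypothetical sketch of what such buyer-specific behavior might look like captured as Postman environment variables; the variable names and values are illustrative assumptions, not the PoC’s actual configuration schema.

```python
# Hypothetical example of capturing buyer-specific behavior as Postman
# environment variables; names and values are illustrative only.
import json

buyer_environment = {
    "name": "example-buyer-access-eline",
    "values": [
        {"key": "sellerBaseUrl",
         "value": "https://seller-emulator.example.com/mefApi/sonata",
         "enabled": True},
        {"key": "productOfferingId", "value": "ACCESS_ELINE", "enabled": True},
        # e.g. synchronous quoting but asynchronous ordering, as discussed earlier
        {"key": "quoteMode", "value": "synchronous", "enabled": True},
        {"key": "orderMode", "value": "asynchronous", "enabled": True},
    ],
}

# Write a Postman environment file that Newman can consume via its "-e" flag.
with open("example-buyer.postman_environment.json", "w") as f:
    json.dump(buyer_environment, f, indent=2)
```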
So the Postman scripts are written by the Buyer according to the documentation provided to them and loaded into the system, and then the Seller can use those configurations as they want?
Exactly.
One very important point you demonstrated in your demo was that you’re actually supporting two product payloads. One is the classic Access E-line, which has matured over quite some time, and the other is the new pre-standard product payload, MEF Internet Access, which is based on existing MEF standards but is still going through the final stages of standardization at the product payload specification level.
Can you tell us a bit about how quickly that was achieved?
And then, if non-MEF product payloads come along, not necessarily from the telecom industry but from other industries that are part of the digital transformation environment, how quickly could you introduce those into the interop platform, and what format would they have to be in?
Let me quote what Aidan said earlier. He mentioned that LSO Sonata is product agnostic. So are the emulators.
It means that they can be used with any type of product, as long as it fulfills one basic requirement: having a defined product schema.
It’s a very minimal requirement, which allows the use of standard product definitions, draft ones, or even experimental ones.
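As an illustration of that minimal requirement, the sketch below defines a toy product schema and validates a payload against it. The fields are invented for the example and are not a MEF product definition.

```python
# Toy illustration of the "defined product schema" prerequisite: the fields
# below are invented for this example, not taken from a MEF specification.
from jsonschema import validate  # pip install jsonschema

internet_access_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "InternetAccess",
    "type": "object",
    "properties": {
        "bandwidthMbps": {"type": "integer", "minimum": 10},
        "ipVersion": {"enum": ["IPv4", "IPv6", "dual-stack"]},
    },
    "required": ["bandwidthMbps"],
}

# Once a schema like this exists, whether standard, draft, or experimental,
# an emulator can validate any incoming product payload against it.
validate(instance={"bandwidthMbps": 100, "ipVersion": "IPv4"},
         schema=internet_access_schema)
```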
In the emulators, we also decouple the product layer from the LSO Sonata protocol layer. This allows us to combine any product type with any supported Sonata release. For this PoC, I was showing how you can use, for instance, Carrier Ethernet Access E-line, or how you can configure and use another product type, in my case Internet Access, with the Sonata R4 release.
Currently, we are working on supporting other releases, like Aretha, and as soon as Billie is released, we will upgrade to it shortly after. But it’s the nature of the emulators that, by configuration, you can combine any product type with any supported Sonata version.
Keeping up with the latest releases: what sort of turnaround time do you expect once a given release comes out? For example, Billie is due out at the end of May.
How quickly do you think such a platform would be updated to support a new release?
If the release does not contain major changes to the API business requirements, we estimate the time we need to extend emulation support to that release at one to two development sprints, which in our case is two to four weeks.
So we’re talking about a month after a release, then, for the interop platform to be updated to support it?
That’s correct.
Let’s take a few wrap-up questions from the audience. To Aidan: do you want to highlight any key points that you think are important for this audience to take away?
I just want to reflect back on some of the points made a few times around the seller- and buyer-specific implementations that you can test against.
For me, that is a fundamentally important part of what we’re doing, because each implementation, each supplier’s product, has differences in terms of what bandwidths they support, what physical interfaces they support, what bandwidth combinations, and so on. If you only develop to a reference standard, you’ll try to go into production and immediately fall over when you find that your expectations and what the seller supports are miles apart.
For me, that was the key learning point through this: the importance of having multiple buyer- and seller-specific implementations that we can go and test against.
Throughout the last three months of the PoC, as we’ve been testing and onboarding the adapters on the Buyer and Seller sides, what I’d summarize is that newer partners who have not yet implemented the API are looking for an onboarding process that is simple and painless.
With multiple partners coming in, that process also needs to be scalable and efficient. That is what this PoC emulator environment addresses: it makes the process very simple and ready to use, and at the same time it increases the scalability of the systems, so you can onboard multiple partners at the same time.
Michael, let’s just summarize where we are now and where we go from here.
In a MEF incubation group, in which leading service providers represented by Aidan and Divesh are very active, there’s a document – we’re not sure yet what we’re going to name it – that the audience should think of as a solutions handbook, or the equivalent, documenting and guiding them on how to use this interop environment effectively. We’ve seen this demo as part of MEF 3.0 PoC 137; where do we go from here?
This demo was a big step in verifying the concept; we’ve been working in an incubation group with a wider group of providers and vendors.
Our plan now is to take this into a commercial pilot in the second half of this year, which will coincide with the Billie release. A number of existing providers are planning to upgrade to it, and new providers are coming in, looking to adopt Sonata for the first time.
So we will support the R4, Aretha, and Billie releases in that commercial pilot. We’re working on engaging a number of providers to sign up for the pilot, which would run to the end of the year.
And this is a lead-in to a general availability service solution that would be provided through MEF to the members starting from 2022.
The commercial pilot, though, would give full-blown access to people who want to use the actual solution, because the whole idea is to get this to people quickly so they can leverage it, and we can increase adoption among providers and make it more accessible.
That’s the current plan right now. We’re discussing how to bring this commercial pilot into reality.
So the commercial pilot would be fully featured, it would be available to a certain range of participants committed to supporting it, and the lessons learned would be used to improve the full production product.
What we’ve actually introduced in this proof of concept is a minimum viable product; it’s a working solution, not just a demo that we were showing you. It’s based on commercial software that is in production.
So what we’re looking at is continuing to expand it for the new releases as they become available. Exactly who will have access to the commercial pilot is still under discussion, but I think it would be beneficial to have many more people able to access it.
That’s a stepping stone to the general availability. And we hope to be able to update people on that in the next few weeks.