Enhancing Serverless Service Reliability through Metadata

Eduardo Elias Saleh
2 min read · Dec 7, 2023


How service-metadata handling can improve your consumer's experience

One of the most valuable aspects of a serverless application lies in its ability to remain decoupled from both downstream and upstream interactions. A well-constructed serverless service efficiently manages its domain, adhering to fundamental principles like SOLID and KISS.

However, maintaining meaningful integration tests becomes challenging as the correlation between incoming requests and downstream service responses increases. Providing valuable feedback to consumers when a provider fails to respond or returns truncated or unexpected data poses another hurdle. In such scenarios, having metadata information within the final response emerges as a critical factor.

Let’s envision a service with specific requirements: it receives requests containing “Client” and “Context” parameters, defining the type of client (e.g., App, Web) and the context (e.g., “Movie,” “Series,” “Other”). Based on these combinations, the service navigates through a matrix of data sources to fetch entertainment suggestions.
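Such a client/context routing matrix can be sketched as a simple lookup table. A minimal sketch (the source names below are hypothetical, not the service's actual configuration):

```python
# Hypothetical routing matrix: (client, context) -> data sources to query.
ROUTING_MATRIX = {
    ("App", "Movie"): ["catalog-a", "trending-svc"],
    ("App", "Series"): ["catalog-a", "catalog-b", "editorial-picks"],
    ("Web", "Movie"): ["catalog-b"],
    ("Web", "Other"): ["editorial-picks"],
}

def select_sources(client: str, context: str) -> list[str]:
    """Return the data sources to query for a given client/context pair."""
    return ROUTING_MATRIX.get((client, context), [])
```

An unknown combination simply yields no sources, which the service can then report back to the consumer instead of failing silently.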

Subsequently, it queries these sources, sometimes a single source, other times multiple (perhaps 4 or even 10) in parallel using AWS Step Functions, collecting suggestions from each. Then, applying predefined business rules, another Lambda function consolidates these suggestions: removing duplicates, prioritizing the highest-quality ones, and filtering out irrelevant ones. This entire process must execute within a strict budget of 300 ms.
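The consolidation step might look roughly like this. A sketch under stated assumptions: the `quality` and `relevance` fields and the 0.5 relevance cutoff are illustrative, not the service's actual business rules:

```python
def consolidate(suggestions: list[dict]) -> list[dict]:
    """Merge suggestions from all sources: drop duplicates by id,
    keep the highest-quality copy, then filter out low-relevance items."""
    best: dict = {}
    for s in suggestions:
        current = best.get(s["id"])
        if current is None or s["quality"] > current["quality"]:
            best[s["id"]] = s  # keep the best-quality duplicate
    # Rank remaining suggestions by quality, highest first.
    ranked = sorted(best.values(), key=lambda s: s["quality"], reverse=True)
    # Hypothetical relevance threshold for filtering irrelevant items.
    return [s for s in ranked if s["relevance"] >= 0.5]
```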

However, testing this service post-deployment presents significant challenges. Data sources frequently alter their service contracts, occasionally disregarding any formal agreements, resulting in responses with inconsistent casing and varying behavior.

To address these issues, we introduced a metadata node within our response structure. How does it function? Consider this example:
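As a sketch, a final response carrying such a metadata node could look like this (all field names and values are illustrative, not the actual contract):

```json
{
  "suggestions": [
    { "id": "m-123", "title": "Some Movie" }
  ],
  "metadata": {
    "requestId": "abc-123",
    "xrayTraceId": "trace-id-placeholder",
    "coldStart": false,
    "selectedSources": ["catalog-a", "trending-svc"],
    "evaluatedRules": ["client=App", "context=Movie"],
    "totalTimeMs": 212
  }
}
```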

During the data-source selection step in the Lambda function, a metadata collector gathers the evaluated rules, the selected data sources and their configurations, execution times, request IDs, and other relevant data such as cold-start indicators and X-Ray trace IDs.

In parallel, the Lambda function that calls the data sources captures which sources were invoked, their response times, status codes, and raw responses. Later, during the prioritization stage, it records the executed rules and their execution times, collating all of this in the metadata collector. Ultimately, this metadata, covering the entire service flow, is appended to the final response.
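One way to structure such a collector is as a small accumulator that every stage writes into, and that the final Lambda attaches to the response. A minimal sketch, assuming a plain dict-based response; the `MetadataCollector` name and its `record`/`attach` methods are hypothetical:

```python
class MetadataCollector:
    """Accumulates per-stage metadata and attaches it to the final response.
    (Illustrative sketch; the real collector also records cold starts,
    X-Ray trace IDs, request IDs, and so on.)"""

    def __init__(self) -> None:
        self.stages: dict = {}

    def record(self, stage: str, **data) -> None:
        """Merge new key/value pairs into the named stage's metadata."""
        self.stages.setdefault(stage, {}).update(data)

    def attach(self, response: dict) -> dict:
        """Append everything collected so far to the outgoing response."""
        response["metadata"] = self.stages
        return response
```

Each stage (selection, data-source calls, prioritization) calls `record` with whatever it knows; only the last step calls `attach`.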

From the client’s perspective, this metadata empowers comparisons between data-source responses and our provided response. It facilitates evaluation of response times and identification of offline or delayed downstream sources. Leveraging this metadata, we can conduct behave/integration tests by verifying if the correct code path was executed solely through the presence of specific metadata. Even in scenarios where downstream services are offline, we can confirm if they were appropriately called with the correct addressing and parameters.
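A metadata-driven check of this kind might look like the following sketch (the field names `selectedSources` and `totalTimeMs` and the source name are assumptions for illustration):

```python
def assert_correct_path(response: dict) -> None:
    """Integration-style check: verify the right code path ran by
    inspecting metadata alone, without depending on downstream payloads."""
    meta = response["metadata"]
    # The expected data source must have been selected and called,
    # even if it was offline and returned nothing.
    assert "catalog-a" in meta["selectedSources"], "expected source not called"
    # The whole flow must respect the 300 ms budget.
    assert meta["totalTimeMs"] <= 300, "exceeded the 300 ms budget"
```

Because the assertion targets metadata rather than suggestion content, the test stays stable even when downstream providers change or misbehave.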



Written by Eduardo Elias Saleh

Brazilian, 80’s kid, Lily’s father. In love with JS, PHP, C# and Baby Yoda. Dev since 97'. Board gamer always up for an Eclipse match. We created and killed God
