
How we Managed Big Data from APIs [DeepDao Case]

October 04, 2022 • 13 min read


Tetiana Stoyko

CTO

When you consider technologies related to cryptocurrency or blockchain, it quickly becomes obvious that this field involves a great deal of big data transfer and management. In fact, it is impossible for a developer to imagine this industry without massive arrays of data, so the ability to manage big data correctly is crucial for the whole sphere.

To illustrate the importance of these aspects, let's examine one of our recent cases. Besides explaining the main aspects of how to manage big data and how data transfers work, we would like to share some tips on how to improve a cryptocurrency project by increasing its performance.

Briefly About the Project

DeepDao is an online platform that belongs to the Decentralized Finance (DeFi) ecosystem. One of its main features is the up-to-date gathering, analysis, and compilation of various statistics, both quantitative and qualitative, which are then displayed to the public in a dashboard format. Such functionality obviously requires the ability to interact with various databases, receive all the needed data from them, and organize safe and stable data transfers.

Additionally, the “DAO” part is worth explaining. It stands for Decentralized Autonomous Organization and is, in fact, a unique project management approach based on blockchain technology that changes the overall hierarchy of the organization itself. Organizations built on this principle are governed horizontally, i.e. there is no CEO, CTO, CFO, or any other “chief”. All processes are automated and managed by blockchain technology.

No individual within the organization has special privileges or access to its assets. Instead, the organization is ruled by the voice of all its members: to get access to, say, the treasury, a person has to get the approval of the other organization members. DAOs are possible thanks to smart contracts.

Tech Stack and Tech Task

Clearly, such a project requires a lot of work related to databases, where big data management and data transfers are especially important. Within the DeepDao case, our developers had to operate and interact with at least 3 different databases: 1 external data source and 2 internal DBs.

The approximate tech task was to gather data about various crypto assets from different sources, compile the gathered data, and display it to our users.

Thus, we need to interact with third-party databases. Partly for this gathering purpose, we use various API integrations, such as the Zerion API, which helps to collect information about various crypto tokens. A variety of other services and API management tools are also integrated into the project: Daohaus, Aragon, Amberdata, Covalent, Snapshot, Subscan, Bitquery, Subgraph, Etherscan, and many others.

However, gathering data is not the final task: it undoubtedly must be stored somewhere. This is why we also needed to integrate a database into the project. In this specific case, we used two different DBs, each for its own reason. PostgreSQL, a SQL database, is used as the main scalable data storage within the DeepDao project. It is one of the best choices here: a common, well-known database that can perform most required tasks and is easy to work with.
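To make this storage layer more tangible, below is a minimal sketch of how a loader might upsert token records into Postgres with the node-postgres (pg) client. The table name and columns are hypothetical, for illustration only; the real DeepDao schema is not public.

```typescript
import { Pool } from "pg"; // node-postgres client

// Hypothetical record shape; the real DeepDao schema is not public.
interface TokenRecord {
  address: string;
  symbol: string;
  priceUsd: number;
}

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Upsert: insert a token row, or refresh its price if it already exists.
async function saveToken(token: TokenRecord): Promise<void> {
  await pool.query(
    `INSERT INTO tokens (address, symbol, price_usd, updated_at)
     VALUES ($1, $2, $3, NOW())
     ON CONFLICT (address)
     DO UPDATE SET price_usd = EXCLUDED.price_usd, updated_at = NOW()`,
    [token.address, token.symbol, token.priceUsd]
  );
}
```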

Nevertheless, we also used Neo4j, a relatively new graph DB, for a few specific tasks in order to improve the overall performance of the application.

API Integration Cases

The number of API integrations in the project is stunning, and each of them serves a specific and important purpose. Doubtless, it would take hours to explain every use case, so let's limit ourselves to a few examples that illustrate the overall logic and usage principles that apply to each API.

Zerion API

As mentioned before, Zerion is used to gather data related to various assets. More precisely, it allows our app to get treasury data in an automated way. In simple words, treasury data consists of the crypto tokens that belong to a specific treasury address, along with their amounts, cost, and so on. The Zerion API enumerates these assets, checks them against all known tokens, determines their prices, combines all this data, and returns it to us in a ready-made form.
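The exact Zerion endpoints and response shapes are beyond the scope of this article, so the URL and fields in the sketch below are placeholders rather than the real API; the point is what the flow looks like from our side once the aggregator has done the matching and pricing:

```typescript
// Placeholder endpoint and response shape: the real Zerion API paths,
// parameters, and field names differ; this only illustrates the flow.
interface TreasuryPosition {
  symbol: string;
  quantity: string; // token amount as a string, to preserve precision
  valueUsd: number; // current USD value, already computed upstream
}

async function fetchTreasury(address: string): Promise<TreasuryPosition[]> {
  const res = await fetch(
    `https://api.example-aggregator.io/v1/treasury/${address}`, // hypothetical URL
    { headers: { Authorization: `Bearer ${process.env.API_KEY}` } }
  );
  if (!res.ok) throw new Error(`Treasury request failed: ${res.status}`);
  return res.json();
}

// The aggregator has already matched and priced every token, so one call
// replaces per-chain collection, standardization, and price math:
const positions = await fetchTreasury("0x0000000000000000000000000000000000000000");
const totalUsd = positions.reduce((sum, p) => sum + p.valueUsd, 0);
console.log(`Treasury total: $${totalUsd.toFixed(2)}`);
```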

It is probably impossible to overestimate such an analytics tool and its role in the overall working process. The alternative to everything this API offers is to perform the same actions manually, and that is not as simple as it seems.

The first issue in such a case is that the required data is scattered across various fields and services, so the first problem is logistics itself. Secondly, each blockchain has its own operational logic, which makes a one-for-all solution impossible: each chain we are interested in demands a specific approach, so it takes additional time and resources just to connect to each of them and get the requested data. Moreover, due to their different system logic, these blockchains store data in unique ways, so it takes even more time to standardize data units from various sources so that our system can read and operate on them uniformly.
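In practice, standardizing data units boils down to mapping each source's shape onto a single internal record. Here is a simplified illustration in TypeScript; both source shapes are invented for the example:

```typescript
// Invented examples of how two different sources might report the same balance.
interface SourceABalance { token: string; raw_amount: string; decimals: number }
interface SourceBBalance { symbol: string; value: number }

// The single internal shape the rest of the system works with.
interface NormalizedBalance { symbol: string; amount: number }

function fromSourceA(b: SourceABalance): NormalizedBalance {
  // Source A reports integer base units plus a decimals field.
  return { symbol: b.token, amount: Number(b.raw_amount) / 10 ** b.decimals };
}

function fromSourceB(b: SourceBBalance): NormalizedBalance {
  // Source B already reports a human-readable amount.
  return { symbol: b.symbol, amount: b.value };
}
```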

Finally, the calculation and validation processes themselves are very resource-intensive. It is no big secret that in the cryptocurrency industry every decimal matters. Add the fact that most transactions consist of fractional amounts only, and you get an incredibly difficult task for a person to calculate without mistakes. Or you can simply use the Zerion API, which automatically takes care of all the foregoing challenges and brings you the requested result as quickly and precisely as possible.
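This is also why token amounts are best handled as integer base units rather than floating-point numbers: IEEE 754 floats silently lose precision exactly where every decimal matters. A quick demonstration:

```typescript
// Floating point fails exactly where token math needs to be exact:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// On-chain amounts are integers of the smallest unit (e.g. wei, 10^-18 ETH),
// so BigInt arithmetic stays exact:
const a = 100_000_000_000_000_000n; // 0.1 ETH in wei
const b = 200_000_000_000_000_000n; // 0.2 ETH in wei
console.log(a + b === 300_000_000_000_000_000n); // true

// Convert to a display string only at the very end.
const WEI_PER_ETH = 10n ** 18n;
const whole = (a + b) / WEI_PER_ETH;
const frac = ((a + b) % WEI_PER_ETH).toString().padStart(18, "0");
console.log(`${whole}.${frac} ETH`); // "0.300000000000000000 ETH"
```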

Aragon API

Aragon is regularly considered a platform. In fact, it is a launcher and management tool for DAO projects. Thanks to Aragon, it is possible to create your own DAO project and get any data related to Aragon-based products. Yet creation is not the one and only function of this platform. As we clarified before, operational logic and architecture may vary, which results in different additional requirements and challenges. It is possible to develop your own DAO project without such tools, but it will take more time, and the architecture may differ from the alternatives. As a result, developers have to spend even more resources on standardizing the data units, and only after that can they proceed to work with them.

With Aragon, on the contrary, developers can simply create similar DAOs based on the same architecture. In fact, Aragon is flexible enough to allow the creation of architectural presets, with the ability to change or set specific parameters. To put it simply, the platform simplifies the creation process and sets similar rules for numerous projects.
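For Aragon-based DAOs, that standardized structure can then be read programmatically. The sketch below assumes Aragon's Connect JS library (@aragon/connect) and uses a placeholder DAO name; treat the exact calls as an assumption rather than a reference:

```typescript
import connect from "@aragon/connect";

// Assumes the Aragon Connect JS library; the DAO name is a placeholder.
async function listDaoApps(daoName: string): Promise<void> {
  // Connect to an organization through The Graph connector on mainnet.
  const org = await connect(daoName, "thegraph", { network: 1 });

  // Every Aragon DAO exposes the same standardized app structure,
  // which is what makes cross-DAO data collection tractable.
  const apps = await org.apps();
  for (const app of apps) {
    console.log(app.name, app.address);
  }
}

listDaoApps("example.aragonid.eth");
```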

Database Use Cases

The heart of each program is its code, while its brain is the database. Thus, it is important to make the right choice in order to avoid unnecessary data issues in the future. People usually recommend choosing SQL-type databases due to their strict structure and logic, which are easier to understand and work with: thanks to their relational nature, structured data is easier to manage and track. However, despite all of their advantages, these databases still cannot be considered the ultimate solution.

For instance, in the DeepDao case, our developers decided to use 2 different DBs in order to avoid performance issues.

PostgreSQL Database

This SQL database is the main one within the project. In fact, it is a great tool that takes care of most processes: saving and tracking data, rewriting it, and so on. However, sometimes we have to work with data that involves numerous relations.

When the number of relations or connections between data blocks is high, Postgres starts to struggle. It can still calculate and connect all these information blocks, but doing so takes much more time and resources and affects other database processes as well. As a result, such numerous relations degrade the general performance of the platform.
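To see why, consider a query that walks connections just two hops deep. In a relational model every hop costs additional self-joins (or a recursive CTE), and the intermediate result sets grow quickly. The edge table below is hypothetical:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical edge table member_of(person_id, dao_id).
// "DAOs within two shared-membership hops of a given DAO": every extra hop
// adds more self-joins on the same edge table, and the intermediate result
// sets the planner has to materialize grow quickly.
async function daosWithinTwoHops(daoId: string): Promise<string[]> {
  const { rows } = await pool.query(
    `SELECT DISTINCT m4.dao_id
       FROM member_of m1
       JOIN member_of m2 ON m2.person_id = m1.person_id  -- hop 1: shared member
       JOIN member_of m3 ON m3.dao_id    = m2.dao_id     -- members of those DAOs
       JOIN member_of m4 ON m4.person_id = m3.person_id  -- hop 2: their other DAOs
      WHERE m1.dao_id = $1`,
    [daoId]
  );
  return rows.map((r) => r.dao_id);
}
```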

Neo4j Graph DB

To avoid such unpleasant circumstances, our developers decided to add a second database that works specifically with these types of data, so they no longer impact performance as much. Moreover, by choosing a dedicated tool for a specific purpose, we ensure that it fits best. Our development team therefore chose Neo4j, which is probably the best possible DB for such specific tasks.

Neo4j is a graph database that can easily find relationships and work with them. Thus, thanks to the combination of these two databases, our team managed to ensure the operation of multi-layered tasks without a significant impact on platform performance.
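The same two-hop question from the previous sketch becomes a single pattern in Cypher, where traversing relationships is the native operation. A minimal sketch with the official neo4j-driver package; the node labels and relationship type are hypothetical:

```typescript
import neo4j from "neo4j-driver";

const driver = neo4j.driver(
  "bolt://localhost:7687",
  neo4j.auth.basic("neo4j", process.env.NEO4J_PASSWORD ?? "")
);

// The variable-length pattern [:MEMBER_OF*..4] walks up to four relationship
// steps (two membership hops) in a single traversal: no joins, no CTEs.
async function daosWithinTwoHops(daoId: string): Promise<string[]> {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (start:Dao {id: $daoId})-[:MEMBER_OF*..4]-(other:Dao)
       WHERE other <> start
       RETURN DISTINCT other.id AS id`,
      { daoId }
    );
    return result.records.map((r) => r.get("id"));
  } finally {
    await session.close();
  }
}
```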

How to Transfer Data Between Databases?

The only problem with this solution is that we have two different databases, with data stored in the first one but operated on in the second. Hence, we had to figure out how to transfer data between the databases back and forth. Yet it is worth admitting that this is not a hard task.

For this purpose, we created a simple loader based on JS scripts. In fact, all such data transfers, including transfers from APIs to the main database, are performed by loaders.

So, the general working scheme of the DeepDao platform looks like this (a simplified loader sketch follows the list):

  1. API integrations gather and process data from third-party APIs.
  2. Code-based loaders transfer the processed data to PostgreSQL.
  3. Data that involves complex or numerous relations is transferred from PostgreSQL to the second database, Neo4j, where it is processed.
  4. Once that is done, the data is transferred from Neo4j back to Postgres via a handmade loader.
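To make the scheme concrete, here is a minimal sketch of what such loaders can look like: read relation-heavy rows from Postgres, process them in Neo4j, and write the computed result back. Table names, labels, and the metric are simplified placeholders; the real loaders are project-specific.

```typescript
import { Pool } from "pg";
import neo4j from "neo4j-driver";

const pg = new Pool({ connectionString: process.env.DATABASE_URL });
const graph = neo4j.driver(
  "bolt://localhost:7687",
  neo4j.auth.basic("neo4j", process.env.NEO4J_PASSWORD ?? "")
);

// Step 3 of the scheme: push relation-heavy rows into Neo4j.
async function loadMembershipsIntoGraph(): Promise<void> {
  const { rows } = await pg.query("SELECT person_id, dao_id FROM member_of");
  const session = graph.session();
  try {
    for (const row of rows) {
      // MERGE is idempotent, so re-running the loader is safe.
      await session.run(
        `MERGE (p:Person {id: $personId})
         MERGE (d:Dao {id: $daoId})
         MERGE (p)-[:MEMBER_OF]->(d)`,
        { personId: row.person_id, daoId: row.dao_id }
      );
    }
  } finally {
    await session.close();
  }
}

// Step 4: pull a computed result back into Postgres.
async function loadResultsBackIntoPostgres(): Promise<void> {
  const session = graph.session();
  try {
    const result = await session.run(
      // Hypothetical metric: how many members each DAO has.
      `MATCH (d:Dao)<-[:MEMBER_OF]-(:Person)
       RETURN d.id AS id, count(*) AS members`
    );
    for (const rec of result.records) {
      await pg.query("UPDATE daos SET member_count = $2 WHERE id = $1", [
        rec.get("id"),
        rec.get("members").toNumber(), // neo4j returns its own Integer type
      ]);
    }
  } finally {
    await session.close();
  }
}
```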

Summary

It is impossible to imagine the working DeepDao platform without API management tools and data transfers between databases. In fact, every project, especially one related to the cryptocurrency market, is a collection of non-trivial, out-of-the-box solutions that may make no sense separately but, when combined, cover each other's flaws.

If you have any additional questions or are willing to share your opinion on the topic, you should know that we are always ready to listen.
