NGO Xchange is a blockchain platform that solves the major compliance issues of fund traceability, security, and fraud, while offering cost savings on transactions. Trading under the ticker symbol “NGO”, the NGO Coin is a cryptocurrency pegged 1:1 to the USD that provides a transparent, immutable ledger to satisfy donor compliance regulations and enable significant financial savings.
Utilizing smart contracts, NGO Coin creates transparent records and real-time traceability of donations, expenses, and contracts with suppliers, sharply reducing the opportunity for fraud and simplifying donor/foundation compliance.
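The immutability claim rests on the standard hash-chained ledger construction; the toy sketch below (an illustration, not NGO Xchange's actual implementation) shows how tampering with one recorded donation invalidates every later entry's hash.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_ledger(records):
    """Chain each record to its predecessor via entry_hash."""
    ledger, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = entry_hash(prev, rec)
        ledger.append({"record": rec, "hash": h})
        prev = h
    return ledger

def verify(ledger):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry_hash(prev, entry["record"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = build_ledger([
    {"type": "donation", "usd": 500},
    {"type": "expense", "usd": 120},
])
print(verify(ledger))               # True
ledger[0]["record"]["usd"] = 9999   # tamper with an early entry
print(verify(ledger))               # False
```

Because each hash covers the previous one, an auditor only needs the final hash to detect any retroactive change anywhere in the record.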
The NGO Coin uses a distributed ledger providing instant and unobstructed transfer of funds between HQ and field offices to pay program expenses. This avoids exposure to local government corruption, reduces the risk to employees carrying large amounts of cash, and simplifies repatriation of funds.
Using the NGO Coin, the financial fees for transferring funds between HQ, field offices, and vendors are reduced from current high levels to 0.15%.
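As a rough illustration of the savings at a 0.15% fee: the sketch below compares it against an assumed 3% legacy transfer rate, which is a placeholder for typical international wire costs, not a figure from NGO Xchange.

```python
# Illustrative fee comparison for an HQ-to-field-office transfer.
# Rates are expressed in basis points (1 bps = 0.01%) to keep the
# arithmetic exact. The 15 bps (0.15%) rate comes from the text;
# the 300 bps (3%) legacy rate is an assumption for comparison.

NGO_COIN_BPS = 15    # 0.15% per the platform description
LEGACY_BPS = 300     # assumed legacy transfer rate

def transfer_fee(amount_usd: float, rate_bps: int) -> float:
    """Fee charged on a transfer at the given basis-point rate."""
    return amount_usd * rate_bps / 10_000

amount = 100_000  # USD sent from HQ to a field office
print(transfer_fee(amount, NGO_COIN_BPS))  # 150.0
print(transfer_fee(amount, LEGACY_BPS))    # 3000.0
```

On this assumed comparison, a $100,000 transfer costs $150 instead of $3,000.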
NGO Xchange is built around an easy-to-use interface that lets non-profit CFOs and Controllers quickly handle their organization's money transfer needs.
Visitor activity on the retail client’s busy website generated an enormous amount of data that could not be managed by its relational database (DB) due to the data's complexity (semi-structured/unstructured), sheer volume, and the high cost of licensing multiple DB nodes to handle that volume.
To better understand user behavior, product reach, and ad performance, the client needed to combine data from differing source formats, technologies, and tools.
Replace legacy systems to support scalability in both volume and complexity, while simultaneously reducing implementation, license, and hardware costs and increasing ROI.
As a retailer capturing user data at the click level, our client wanted a platform on which it could visualize and analyze that data. MPP and similar database platforms were unable to complete multiple batch runs across geographies.
Opia's engineers developed the end-to-end solution for the client, further enabling large performance gains through memory tuning of the Hive processes. Our experts spun up and ran the entire platform and proof of concept in under 100 hours.
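Hive memory tuning of this kind typically involves execution-engine and container settings; the fragment below sketches the sort of session-level options involved. The specific values are illustrative assumptions, not the client's actual configuration.

```sql
-- Illustrative Hive session settings; values are assumptions
SET hive.execution.engine=tez;
SET hive.tez.container.size=4096;                    -- MB per Tez container
SET hive.auto.convert.join=true;                     -- map-side joins for small tables
SET hive.exec.reducers.bytes.per.reducer=256000000;  -- controls reducer parallelism
```

In practice these values are sized against the cluster's YARN container limits and the skew of the workload.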
Case Study 3: Data science
The client had a core business process that involved funds, bonds, and securities data. They were using a SQL Server database and had created multiple databases (schemas: staging area, data mart) on the same server. The client had implemented Kimball architecture to create a reporting hub in the SQL Server DB. As the volume of data grew over time, the nightly batch time increased steadily.
Business users wanted the investment data to update automatically by 6:00 a.m. EST with the latest information from the previous business day, so they could receive insights on real-time ticker prices and make informed decisions on margin trades for that day.
Improve the performance of the nightly batch, which was running for 12-13 hours. We presented the client with multiple solutions and let them choose the best fit based on their timeframe, budget, resource needs, and skills.
We presented the client with a solution to split the STAGE and EDM databases onto different servers and hardware so there would be no I/O or CPU contention, which the analysis had surfaced. This project would require 6-12 months to recode and repoint all existing processes to the new DB connections.
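Much of the repointing effort in the first option comes from connection strings being scattered across processes; a minimal sketch of centralizing that lookup is shown below. The server hostnames and config layout are hypothetical, not the client's environment.

```python
# Hypothetical connection registry after splitting STAGE and EDM
# onto separate servers. Each ETL process asks for its target by
# name instead of hard-coding a server, so a future split touches
# only this one mapping rather than 6-12 months of recoding.

SERVERS = {
    "STAGE": "stage-sql01.corp.example",  # assumed hostname
    "EDM":   "edm-sql01.corp.example",    # assumed hostname
}

def connection_string(database: str) -> str:
    """Build an ODBC-style SQL Server connection string."""
    server = SERVERS[database]
    return (f"Driver={{ODBC Driver 17 for SQL Server}};"
            f"Server={server};Database={database};"
            f"Trusted_Connection=yes;")

print(connection_string("STAGE"))
print(connection_string("EDM"))
```

With this pattern in place from the start, moving a database to new hardware is a one-line change rather than a rewiring project.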
A second solution we presented was to move all processes and data to Big Data technologies (proprietary or cloud), using the Hortonworks Hadoop platform (open source) to create a data lake and handle scalability at low implementation cost. This would be the best-performing and most economical solution.