* Development Purpose
While natural language processing based on GPT-3 is widely popular, GPT-3's highly accurate model consists of 175 billion parameters; at 4 bytes per parameter, roughly 700 GB of RAM would be needed just to load it.
The state-of-the-art large-scale language model PaLM has more than three times as many parameters as GPT-3, at 540 billion. GPT-4 is estimated to have several trillion parameters, which would require over 4 TB of RAM.
State-of-the-art AI systems are therefore designed to run in cloud environments.
Even for in-house use, few individuals or organizations can afford over a million yen worth of GPUs, let alone a power supply comparable to that of a large factory.
Except for applications that use the ChatGPT API, our technology specializes in integrating the inference part of deep learning AI into existing systems.
The inference part operates quickly even without a GPU.
Training is performed on GPUs with around 24 GB of memory.
All of the technologies we provide have been incorporated into commercial applications that are actually on sale.
We can fulfill requests such as the following:
You want to run AI as a standalone application on the client side, not as a web app.
You want to use deep learning AI even in a 32-bit environment without a GPU.
You want to speed up deep learning AI using multithreading (see the sketch after this list).
You want to run over 20 deep learning AI-compatible apps on a single server.
You want to use deep learning AI from languages like C++/C#/C, not just Python.
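As one illustration of the multithreading and no-GPU points above, here is a minimal sketch of CPU-only, multithreaded inference in TensorFlow. The model file name, input shape, and thread counts are placeholders, not our actual configuration:

```python
import numpy as np
import tensorflow as tf

# Hide any GPUs so inference runs on the CPU only.
tf.config.set_visible_devices([], "GPU")

# Use several CPU threads inside each op, and run independent
# ops in parallel (thread counts here are illustrative).
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

# "model.h5" and the 28x28x1 input shape are placeholders.
model = tf.keras.models.load_model("model.h5")
batch = np.random.rand(32, 28, 28, 1).astype("float32")
print(model.predict(batch).shape)
```
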
* Development Achievements
The following development achievements exist (including ongoing development). In each case, the learning part runs on a GPU using TensorFlow+Keras or similar tools, and development is done by calling the inference program (no GPU required), which uses only the pre-trained parameters, from existing C++/C#/Java systems.
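As a minimal sketch of this training/inference split (the file names and binary layout are assumptions, not our actual interface), trained Keras parameters can be dumped to a flat file that a small C++/C#/Java forward-pass routine reads back without needing any framework:

```python
import struct
import numpy as np
import tensorflow as tf

# Train (or load) a Keras model on the GPU side as usual.
model = tf.keras.models.load_model("trained_model.h5")  # hypothetical file

# Dump every weight tensor to one flat binary file that a small
# C++/C#/Java inference routine can read without any framework.
with open("params.bin", "wb") as f:
    for w in model.get_weights():
        arr = w.astype(np.float32)
        f.write(struct.pack("<I", arr.size))  # element count header
        f.write(arr.tobytes())                # little-endian float32 data
```
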
Extending RV (raster-to-vector) conversion with deep learning support while keeping its high speed, by narrowing the candidate search locations down to the endpoints and intersections obtained from conventional RV conversion.

We are also developing a chat program that presents in text the process of solving advanced word problems for elementary school students, without using equations, and leads the student to the answer. Its accuracy is higher than that of ChatGPT+GPT-4. For comparative verification, we are additionally developing a customized model by fine-tuning the ChatGPT model.
* Denoising Autoencoder
The pioneer of the so-called deep learning boom, and a general-purpose deep learning model that can be used for many purposes. It has been used to classify the text fields of business-card recognition results (company name, full name, affiliation, title, postal code, address, phone number, email, and so on).
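As a rough illustration only (the layer sizes, noise level, and placeholder data are assumptions, not those of our production model), a denoising autoencoder in Keras looks like this:

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative sizes: 256-dim input features, 64-dim bottleneck.
INPUT_DIM, HIDDEN_DIM = 256, 64

model = models.Sequential([
    layers.Input(shape=(INPUT_DIM,)),
    layers.GaussianNoise(0.2),                      # corrupt input during training
    layers.Dense(HIDDEN_DIM, activation="relu"),    # encoder
    layers.Dense(INPUT_DIM, activation="sigmoid"),  # decoder
])
model.compile(optimizer="adam", loss="mse")

# Train to reconstruct the clean input from its noisy version.
x = np.random.rand(1000, INPUT_DIM).astype("float32")  # placeholder data
model.fit(x, x, epochs=5, batch_size=32)
```

For a classification task such as the business-card field classification above, the trained encoder's bottleneck features would then feed a classifier head; the sketch shows only the denoising reconstruction stage.
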
* CNN (Convolutional Neural Network)
A model for image recognition. For image recognition, it achieves faster learning and higher inference accuracy than a Denoising Autoencoder. It has been used for symbol recognition on maps and for the deep learning support of character recognition libraries.
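A minimal Keras sketch of this kind of classifier (the 32x32 grayscale input size and 10 symbol classes are assumptions for illustration, not our actual map-symbol model):

```python
from tensorflow.keras import layers, models

# Illustrative: 32x32 grayscale symbol images, 10 symbol classes.
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
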
* word2vec
A model for natural language processing. By obtaining distributed representations of words, sentences can be represented as arrays of multidimensional vectors, and these vectors can be used directly as input data for deep learning. This model has been used for language processing in character recognition libraries.
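A minimal sketch with gensim (the toy corpus and vector size are assumptions) of turning words into distributed representations and a sentence into an array of vectors ready for a deep learning model:

```python
import numpy as np
from gensim.models import Word2Vec

# Toy corpus of tokenized sentences (placeholder data).
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# Learn 50-dimensional distributed representations of words.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1)

# Represent a sentence as an array of word vectors,
# ready to be fed to a deep learning model as input.
sentence = ["the", "cat", "sat"]
vectors = np.stack([model.wv[w] for w in sentence])
print(vectors.shape)  # (3, 50)
```
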
* Fine-tuning of large-scale language models
An API-ready application for fine-tuning large-scale language models. This system is designed for cloud applications and does not support local execution.
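As one hedged example of such an API-based workflow (using the OpenAI Python SDK; the training file name and base model name are placeholders, and our actual application may differ):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
training = client.files.create(
    file=open("training_data.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Start a fine-tuning job on a base chat model (name is illustrative).
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```
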