❇️ Neurolanche - Next Steps

Everyone is eagerly awaiting those answers.

Since Sota closed the discussion after my last message, I’m reopening this thread to urge the new council to take decisive action on the fraud Leo has consistently perpetrated through dApp staking. As stated in the Community Council’s guidelines:

“The Council can unregister a dApp from dApp staking if necessary. A 4/5 majority agreement is required for additional scrutiny and consensus.”

It’s easy to mislead people on Twitter with marketing buzzwords and flashy diagrams, especially when most users don’t have the technical expertise to see through the façade. However, for those of us who do, Leo’s claims regarding Nerox AI raise glaring red flags.

If Leo is genuinely building an AI model capable of emotional intelligence, real-time multimodal interaction, and empathetic responses without leveraging state-of-the-art tools like ChatGPT or foundational frameworks from OpenAI, Microsoft, or Hugging Face, then it’s time for him to provide hard evidence. Here’s the challenge:

Key Questions Leo Must Address

  1. Proprietary AI Development

• You’ve claimed not to rely on foundational models like GPT. Can you prove this? What is your custom NLP framework, and how does it work?

• If you’re not using ChatGPT or similar tools, explain why Azure AI Foundry is being name-dropped without providing architectural specifics.

  2. Benchmarks and Performance

• Where are the benchmarks comparing your model’s performance to existing state-of-the-art NLP models?

• Can you provide real-world demos or sources, or are these just mockups made to convince non-technical audiences?

  3. Technical Infrastructure

• What infrastructure are you using for training and inference? GPUs? TPUs? How do you achieve real-time performance for emotional and facial analysis at scale?

• Are you using third-party APIs for transcription, emotion analysis, or face detection? If so, isn’t this just wrapper work being sold as “innovative AI”?

  4. Training Data and Methodology

• What training dataset powered this groundbreaking innovation? Was it proprietary, or did you depend on publicly available datasets?

• How are you addressing bias and fairness in facial and emotional recognition, areas notorious for their inherent flaws?

  5. Transparency and Open-Source Contributions

• Why isn’t there a public repository (e.g., GitHub) to validate your claims of innovation? Open collaboration fosters trust.

• Are any components of Nerox AI open-source, or is everything locked away to avoid scrutiny?

Fraudulent Patterns in dApp Staking

Beyond the technical questions, let’s not forget the core issue here: Leo’s dApp staking activities have consistently demonstrated manipulative practices designed to extract funds without delivering tangible value. His Twitter posts serve as distractions, yet they fail to address the lack of transparency, accountability, and measurable results in his projects.

The council must act swiftly to protect the ecosystem from further exploitation. Leo’s refusal to answer these questions previously speaks volumes about his intentions. If his claims cannot stand up to scrutiny now, it’s the council’s responsibility to unregister his dApp from staking and uphold the integrity of the platform.

Transparency isn’t optional; it’s essential. We demand answers, not more marketing fluff.

Happy New Year 2025!

1 Like

@Leo Can you share your answers in this discussion?

Encouraging members to go to external sources to find answers to the above questions doesn’t help move the discussion forward.

3 Likes

As a council member, I’m not going to execute a delisting of Neurolanche just because one community member requested it.

Right now, the Community Council is focused on creating a Code of dApp Staking. This will include a list of situations where a project could be delisted. Once that framework is ready and the community agrees with it, we’ll start reviewing projects to see if any don’t meet the standards and need to be removed.

If Neurolanche isn’t following the code once we begin investigating projects, the Council could propose a vote to delist them. But at this moment, we don’t have a solid reason to do so.

I respect your opinion that Neurolanche isn’t legitimate for dApp Staking, but for now, you’ll need to wait for the Community Council to finalize the code before any actions are taken. If you’d rather not wait, you’re welcome to create an on-chain proposal on Subsquare to delist Neurolanche. ASTR token holders will then vote and decide.

Thanks for understanding!

2 Likes

If I’m asked for such a response in a kind and respectful manner, I’ll always share it, my friend. I will send a document with all the staking expenses, along with a file containing the reports I’ve prepared so far.

Along with the documentation, I’m going to record a video explaining how we are building, then share it here.

1 Like

Some valid questions have been raised about your project. While we’ll moderate any false accusations or toxic comments (according to the new rules of engagement), that doesn’t mean you can ignore legitimate questions from ASTR holders. Given that your project has been marketing heavily around dApp Staking, has received decent support from stakers, and has been part of the program for a while now, it’s totally fair for the community to ask for some clarity.

The above comment is from @Gaius_sama in the DeStore de-listing thread…

So @Leo responded to @sota and everyone here, promising to finally address the concerns around his project. He gave a deadline of 24 hours, which he then broke… and now he is just straight-up refusing to answer?

When @knacker65 and the Skylabs team were asked questions, they showed full transparency… and their reward (judging by the poll results) is more than likely being delisted as a result.

Now Neurolanche are being protected from scrutiny…why? @Maarten

The precedent has been set with Skylabs - if projects don’t offer (reasonable) transparency or address legitimate and founded concerns, they should automatically be delisted (imo).

If they offer transparency and the results aren’t satisfactory, then they face the vote.

DeStore are also answering the scrutiny in the other thread. Why are NL being given a free pass?

What more can we as a community do with regard to this topic, when we are just stonewalled at every turn?

The core team desperately wants to increase voter turnout and participation here, so don’t let our concerns continue to fall on deaf ears.

2 Likes

@FFR23 I’m also asking Leo to provide answers in this discussion as mentioned here: ❇️ Neurolanche - Next Steps - #23 by Gaius_sama

1 Like

1-

• You’ve claimed not to rely on foundational models like GPT. Can you prove this? What is your custom NLP framework, and how does it work?

• If you’re not using ChatGPT or similar tools, explain why Azure AI Foundry is being name-dropped without providing architectural specifics.

There seems to be a misunderstanding here. The intention is not to rely solely on a single foundational model. Developing an entirely new large language model (LLM) from scratch would indeed be impractical. Instead, our structure combines various tools and approaches to deliver a robust and dynamic solution. Here’s how it works:

We rely on GPT-4 for chat completions, integrated via OpenAI’s API. While this is not a fully custom NLP framework, we enhance its functionality by fine-tuning specific models using Azure AI Foundry. Azure AI provides the infrastructure for fine-tuning and training domain-specific models. This allows us to adapt pre-existing foundational models, like GPT-4, to meet the unique requirements of tasks such as fitness coaching. Fine-tuning is conducted with meticulously curated datasets tailored to our application’s needs.
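
For readers who want to see what that integration looks like in practice, here is a minimal sketch of a chat-completion call against a fine-tuned model via the OpenAI Python SDK. The model ID, system prompt, and user message are hypothetical placeholders, not Nerox AI’s actual values:

```python
# Minimal sketch (not the actual Nerox AI code): calling a fine-tuned chat
# model through the OpenAI Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    # Hypothetical fine-tuned model ID; real IDs follow the
    # "ft:<base>:<org>:<suffix>:<id>" pattern
    model="ft:gpt-4o-2024-08-06:example-org:fitness-coach:abc123",
    messages=[
        {"role": "system", "content": "You are an empathetic fitness coach."},
        {"role": "user", "content": "I only have 20 minutes today. What should I train?"},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```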

For audio, we use Google Text-to-Speech, from which we generate data for live synchronization, including lip, hand, and body movements for avatars. We utilize Ready Player Me avatars, and animations are custom-created from scratch using Blender to ensure seamless integration and a highly immersive experience.
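
As an illustration of the audio step, here is a minimal Google Cloud Text-to-Speech sketch; the voice selection and output filename are illustrative, and the avatar lip/body synchronization layer itself is not shown:

```python
# Minimal sketch: synthesizing one line of coach dialogue with Google Cloud
# Text-to-Speech (google-cloud-texttospeech package).
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Great job! Let's start the warm-up."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
    ),
)

with open("coach_line.mp3", "wb") as f:
    f.write(response.audio_content)  # raw MP3 bytes, handed to the avatar pipeline
```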

For speech-to-text functionality, we utilize Flutter’s Speech-to-Text library, which integrates effectively with our mobile application framework.

For visual data, we leverage the Google Vision API (a minimal sketch follows after this list) for:

Face detection, enabling facial recognition.

Label detection, used for identifying objects in images.
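
Here is the Vision API sketch referenced above, covering both calls. The image path is a placeholder; the expression likelihoods printed are what the API returns out of the box:

```python
# Minimal sketch: face and label detection with the Google Cloud Vision API
# (google-cloud-vision package).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Face detection: the API returns likelihood enums (e.g. VERY_LIKELY) for
# basic expressions such as joy and surprise.
for face in client.face_detection(image=image).face_annotations:
    print("joy:", face.joy_likelihood, "surprise:", face.surprise_likelihood)

# Label detection: objects and concepts recognized in the image.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))
```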

By combining these technologies, we do not rely solely on a single foundational model but rather integrate and enhance multiple state-of-the-art tools. This approach ensures flexibility, adaptability, and a robust system tailored to the unique demands of our project. Azure AI Foundry is referenced here as the backbone for fine-tuning and scaling our models efficiently, leveraging its compute resources and integration capabilities.

2-

• Where are the benchmarks comparing your model’s performance to existing state-of-the-art NLP models?

While we are not developing a new foundational NLP model, we have conducted internal evaluations to measure the performance of our fine-tuned GPT-4 model alongside other integrated technologies in our system. Our approach combines multiple state-of-the-art tools and APIs to address domain-specific use cases, and these evaluations focus on task-specific performance rather than direct comparisons with general-purpose NLP models.

Here’s how our benchmarks are structured:

Chatbot Fine-Tuning: We fine-tune GPT-4 using Azure AI Foundry, adapting it to respond more accurately to specific domain-related queries.

Visual Analysis: Google Vision API is evaluated for its precision in face detection and label identification.

Audio and Voice Processing: Google Text-to-Speech and Flutter’s Speech-to-Text are assessed for their effectiveness in real-time synchronization with avatars and transcription accuracy.

Avatar Integration: Benchmarks also include the smoothness of animations created using Blender and their integration with Ready Player Me avatars for an immersive user experience.

These evaluations allow us to ensure that each component delivers robust and reliable results when combined in the application. Our detailed benchmark report, including metrics like task success rates, user satisfaction, and precision/recall scores, will be shared after further testing.
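
As a concrete illustration of one of those metrics, here is a minimal sketch of how precision and recall would be computed for a single component such as face detection; the counts are placeholder numbers, not actual benchmark results:

```python
# Minimal sketch: precision/recall for one component (e.g. face detection)
# from true-positive/false-positive/false-negative counts.
# The counts are placeholder numbers, not Nerox AI benchmark results.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall(tp=182, fp=9, fn=14)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.953 recall=0.929
```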

• Can you provide real-world demos or sources, or are these just mockups made to convince non-technical audiences?

A functional beta version of the application, which includes some important features (chatbot, fine-tuning, image analysis, and voice interaction), will be released by the end of this month. This beta will be accessible to users who apply for participation, allowing them to experience the application’s real-world capabilities firsthand.

3-

• What infrastructure are you using for training and inference? GPUs? TPUs? How do you achieve real-time performance for emotional and facial analysis at scale?

Training Infrastructure:

We use Azure AI Foundry, which provides scalable GPU infrastructure for fine-tuning our foundational model (GPT-4) and domain-specific adaptations. Azure’s infrastructure ensures that our training processes are efficient and can handle the large-scale data required for fine-tuning.

Inference Infrastructure:

For real-time inference, our backend is designed to support low-latency API interactions with systems like OpenAI’s GPT models, Google Vision API, and Google Text-to-Speech. By optimizing requests and caching frequently used results, we minimize response times.
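
To make the caching idea concrete, here is a minimal sketch using Python’s functools.lru_cache; call_model is a hypothetical stand-in for the real chat-completion request, not an actual Nerox AI function:

```python
# Minimal sketch: caching frequent prompts to cut response latency.
import time
from functools import lru_cache

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real API request.
    time.sleep(1.0)  # simulate network latency
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_answer(prompt_key: str) -> str:
    return call_model(prompt_key)  # only runs on a cache miss

def answer(prompt: str) -> str:
    # Normalize case and whitespace so near-identical prompts share an entry.
    return cached_answer(" ".join(prompt.lower().split()))

print(answer("What is a good warm-up?"))   # ~1 s: cache miss
print(answer("what is a good  warm-up?"))  # instant: served from cache
```

In a deployed backend a shared cache (e.g. Redis) would replace the in-process lru_cache, but the latency principle is the same.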

Real-Time Performance for Emotional and Facial Analysis:

Emotional and facial analysis is achieved using the Google Vision API, which is highly optimized for real-time face detection and label identification. To ensure scalability, we integrate these API calls with a backend infrastructure that efficiently handles concurrent requests. Additionally, our avatar animations, created using Blender and powered by Ready Player Me, are pre-rendered and dynamically updated for smooth performance.
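
As a sketch of the concurrent-request handling described above, the following runs Vision API face-detection calls through a thread pool; the frame paths and worker count are illustrative:

```python
# Minimal sketch: serving many face-detection requests concurrently with a
# thread pool (google-cloud-vision package).
from concurrent.futures import ThreadPoolExecutor
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_faces(path: str):
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    return path, client.face_detection(image=image).face_annotations

frames = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]  # placeholder paths
with ThreadPoolExecutor(max_workers=8) as pool:
    for path, faces in pool.map(detect_faces, frames):
        print(path, "faces detected:", len(faces))
```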

• Are you using third-party APIs for transcription, emotion analysis, or face detection? If so, isn’t this just wrapper work being sold as “innovative AI”?

Yes, we use third-party APIs for specific tasks:

Google Vision API for face detection and label identification.

Google Text-to-Speech for generating natural-sounding audio outputs.

Flutter Speech-to-Text library for transcription.

Why It’s More Than Wrapper Work:

While we integrate third-party APIs, our innovation lies in how we combine and enhance these technologies to create a seamless and immersive user experience.

4-

• What training dataset powered this groundbreaking innovation? Was it proprietary, or did you depend on publicly available datasets?

Our approach combines both publicly available datasets and proprietary datasets curated for domain-specific tasks. Here’s a breakdown:

Publicly Available Data: We sourced datasets from platforms like Kaggle and Hugging Face for initial fine-tuning. Additionally, we extracted content such as transcripts from fitness-related YouTube videos and scientific studies using automated tools.

Proprietary Data: To tailor the model for tasks like fitness coaching, we created custom datasets by compiling and cleaning data from industry-specific sources, such as fitness journals, magazines, and expert-curated content. These datasets were formatted in OpenAI’s fine-tuning structure for maximum compatibility (a sketch of that format follows after this list).

Enhancements: Data preprocessing and augmentation techniques were applied to ensure relevance, quality, and coverage of key topics.
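
Here is the format sketch referenced above: one training example written in OpenAI’s chat fine-tuning JSONL structure (one JSON object per line). The coaching content is illustrative, not taken from the actual dataset:

```python
# Minimal sketch: appending one training example in OpenAI's chat
# fine-tuning JSONL format. The example content is illustrative only.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are an empathetic fitness coach."},
        {"role": "user", "content": "My knees hurt when I squat. What can I do?"},
        {"role": "assistant", "content": "Let's reduce depth and load first, then review your stance width and foot angle."},
    ]
}

with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```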

• How are you addressing bias and fairness in facial and emotional recognition, areas notorious for their inherent flaws?

We recognize the challenges of bias in facial and emotional recognition and take the following steps to mitigate them:

Diverse Data Collection: The datasets used for training and testing include images and emotional cues from diverse demographics, ensuring representation across age groups, genders, ethnicities, and cultural backgrounds.

Testing and Validation: Models are evaluated using balanced datasets to identify and address potential biases in their predictions (a minimal per-group check is sketched after this list).

Adjustments to Sensitivity: Emotion detection algorithms are fine-tuned to avoid overfitting or bias toward stereotypical expressions, ensuring a neutral and fair evaluation of facial cues.

Human Oversight: Critical outputs related to facial and emotional recognition are reviewed by domain experts to validate fairness and accuracy.

Transparency: Regular audits and documentation of the datasets and model adjustments are maintained to ensure accountability.
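
Here is the per-group check referenced above: a minimal sketch comparing emotion-classifier accuracy across demographic groups on a balanced test set, with placeholder data throughout:

```python
# Minimal sketch: per-group accuracy of an emotion classifier on a balanced
# test set. All data below is placeholder, not Nerox AI results.
from collections import defaultdict

# (group, true_label, predicted_label) triples from a balanced evaluation set
results = [
    ("group_a", "happy", "happy"),
    ("group_a", "sad", "happy"),
    ("group_b", "happy", "happy"),
    ("group_b", "sad", "sad"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    # Large accuracy gaps between groups are a signal of bias.
    print(f"{group}: accuracy={correct[group] / total[group]:.2f}")
```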

5-

• Why isn’t there a public repository (e.g., GitHub) to validate your claims of innovation? Open collaboration fosters trust.

• Are any components of Nerox AI open-source, or is everything locked away to avoid scrutiny?

While we strongly believe in transparency and the value of open collaboration, certain aspects of Nerox AI are currently not open-source due to proprietary considerations and competitive market dynamics. The project includes several custom integrations and fine-tuned models that are critical to maintaining our competitive edge.

That said, we are open to collaborating with trusted partners, such as adding the Astar Core team to our private GitHub repository.

Fraudulent Patterns in dApp Staking

We welcome scrutiny and transparency. However, the claim of manipulative practices in our dApp staking activities is unfounded.

NEROX AI Budget and Transparency

The NEROX AI project operates with a budget of approximately $100,000, fully detailed and accessible via Notion. This demonstrates our commitment to transparency and delivering real value to the ecosystem.

Through this project, we aim to bring cutting-edge AI capabilities into the blockchain ecosystem, setting a benchmark for innovation and integration. NEROX AI is a testament to our dedication to advancing not only our platform but also the broader Astar ecosystem.

Neurolauncher NFT Staking Rewards

The Neurolauncher NFT collection has established itself as a cornerstone of Astar’s WASM ecosystem, becoming the only NFT collection generating significant volume. To date, it has achieved a trade volume of 2,835,553 ASTR, beginning with an initial mint price of 1,000 ASTR, and now boasts a floor price of 8,400 ASTR. This remarkable growth underscores the value and impact of the collection within the ecosystem.

Beyond its financial success, the Neurolauncher NFT collection has been instrumental in fostering a thriving NFT culture on Astar. By introducing users to the potential of WASM-based NFTs, we have cultivated a vibrant and engaged community that actively contributes to the platform’s growth and adoption. This community-driven approach ensures long-term sustainability and value for both Astar and its users.

In addition, the Neurolauncher NFT staking rewards, distributing approximately 46,000 ASTR monthly, are fully on-chain and verifiable. These rewards provide direct and tangible benefits to our users, further solidifying our contributions to the ecosystem.

Neurolauncher NFT Collection: https://astar.paras.id/collection/bYLgJmSkWd4S4pTEacF2sNBWFeM4bNerS27FgNVcC9SqRE4

X (Twitter) Golden Tick

Our verified presence on X (Twitter) through the golden tick verification ($650) ensures clear and credible communication with the community, reinforcing our commitment to openness.

These verifiable contributions and transparent practices exemplify our commitment to advancing Astar Network’s ecosystem while maintaining accountability. Claims suggesting otherwise lack substance and fail to recognize the real and measurable value we provide.

3 Likes