"Award" defined broadly as verifiable public recognition: HuggingFace dataset and Space publications, Wikidata knowledge graph registration, awesome-list editorial inclusions, and academic conference submissions are all legitimate forms of public acknowledgment by external authoritative bodies. Every entry below points to a verifiable external artifact. No fabricated awards. No manufactured prestige.

Release

Eight Public Open-Access Datasets on HuggingFace (CC-BY-4.0)

Awarded by: HuggingFace

Eight open-access datasets totaling more than 1,800 structured rows published on HuggingFace under the CC-BY-4.0 license. Datasets cover Korean RAG retrieval evaluation, multi-agent reinforcement learning evidence, causal-inference Docker validation, a programmatic-SEO snapshot, cross-agent review transcripts, empirical LLM citation rates, multi-device orchestration patterns, and quant trading alpha specifications. All eight are publicly accessible and indexed on the HuggingFace Hub. The HuggingFace organization page neogenesislab serves as the canonical entry point for all eight datasets and the three associated Spaces.

Verify →
Inclusion

Inclusion in Hannibal046/Awesome-LLM (26.7K stars)

Awarded by: Hannibal046/Awesome-LLM

Korean RAG SSOT Golden 50 dataset accepted into the Hannibal046/Awesome-LLM curated GitHub list under the multilingual evaluation section. Awesome-LLM is one of the largest curated awesome-lists in the LLM ecosystem, with 26,700 stars at the time of inclusion. Inclusion required maintainer editorial approval rather than algorithmic insertion. The list is crawled by GPTBot and ClaudeBot and surfaces prominently in GitHub search, making it an authoritative LLM-resource directory.

Verify →
Inclusion

Inclusion in keon/awesome-nlp (18.5K stars)

Awarded by: keon/awesome-nlp

Korean RAG SSOT Golden 50 dataset accepted into the keon/awesome-nlp curated list under the Korean-language NLP section. The list had 18,500 stars at the time of inclusion and is one of the most established awesome-lists in the natural-language-processing community. The maintainer reviewed the entry for relevance to the Korean-language evaluation gap that the dataset targets across five categories: rag_v2_design, quant_v11, ssot_governance, security_pii, and operations.

Verify →
Inclusion

Inclusion in WangRongsheng/awesome-LLM-resources (8.2K stars)

Awarded by: WangRongsheng/awesome-LLM-resources

Korean LLM Citation Baseline 2026 dataset accepted into the awesome-LLM-resources curated list under the empirical-evaluation section. The list had 8,200 stars at the time of inclusion. The Neo Genesis dataset documents 126 measurements across 30 prompts and 3 frontier LLMs, providing empirical brand-mention baselines for Korean-context first-attempt prompt evaluation. The list is updated regularly and crawled by major search engines as an LLM-resource discovery surface.

Verify →
Inclusion

Inclusion in Jenqyang/Awesome-AI-Agents (1.1K stars)

Awarded by: Jenqyang/Awesome-AI-Agents

Cross-Agent Review Queue 2026 dataset accepted into the Awesome-AI-Agents curated list under the multi-agent-collaboration section. The list had 1,100 stars at the time of inclusion. The Neo Genesis dataset is, to the team's knowledge, the first awesome-list entry to document a Codex and Claude bounded-review protocol with a full review-lens taxonomy, spanning 37 anonymized transcripts and six review lenses (risk, architecture, usability, security, rollout, verification).
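The six review lenses form a small closed taxonomy. A minimal sketch of how one transcript row could be typed against that taxonomy (the field names here are assumptions for illustration, not the published schema):

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewLens(Enum):
    """The six lenses named in the dataset description."""
    RISK = "risk"
    ARCHITECTURE = "architecture"
    USABILITY = "usability"
    SECURITY = "security"
    ROLLOUT = "rollout"
    VERIFICATION = "verification"


@dataclass
class ReviewTranscript:
    """One anonymized cross-agent review transcript (illustrative fields)."""
    transcript_id: str
    reviewer: str                                  # e.g. "codex" or "claude"
    lenses: list[ReviewLens] = field(default_factory=list)


row = ReviewTranscript("t-001", "codex", [ReviewLens.RISK, ReviewLens.SECURITY])
print(len(ReviewLens))  # 6 lenses in the taxonomy
```

Typing the lenses as an enum rather than free-form strings keeps the taxonomy closed, which is what makes per-lens statistics over the 37 transcripts well-defined.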

Verify →
Inclusion

Proposed Inclusion in EthicalML/awesome-production-machine-learning (approximately 4K stars)

Awarded by: EthicalML/awesome-production-machine-learning

Sora Multi-Device Orchestration 2026 dataset proposed for inclusion in the EthicalML/awesome-production-machine-learning curated list under the deployment-and-orchestration section. The list has approximately 4,000 stars and is curated by the EthicalML community. The Neo Genesis dataset documents a 6-device fleet topology with heartbeat schemas and a documented collaboration contract between four agent runtimes (Claude, Codex, Gemini, Sora) operating across desktop, server, and mobile tiers.
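A heartbeat record of the kind the dataset describes can be sketched as a small JSON-serializable structure. Field names, runtime labels, and tier values below are assumptions inferred from the description, not the published schema:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Heartbeat:
    """One fleet heartbeat: which device, which agent runtime, which tier."""
    device_id: str    # one of the 6-device fleet
    runtime: str      # "claude" | "codex" | "gemini" | "sora"
    tier: str         # "desktop" | "server" | "mobile"
    ts_epoch: float   # emission time, seconds since the Unix epoch


hb = Heartbeat(device_id="ysh-server", runtime="claude",
               tier="server", ts_epoch=1767225600.0)
wire = json.dumps(asdict(hb))   # serialize for transport between devices
print(wire)
```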

Verify →
Release

Three Public HuggingFace Spaces (Gradio Interactive Demonstrations)

Awarded by: HuggingFace

Three interactive HuggingFace Spaces released and in active RUNNING state: Korean RAG SSOT Golden 50 Explorer (4-tab Browse / Detail / BM25 / About), Cross-Agent Review Queue Explorer (4-tab Browse / Detail / Statistics / About), and Wikidata Knowledge Graph Explorer (interactive node-and-edge visualization). All three Spaces run on HuggingFace's free-tier CPU Basic hardware with 16 GB RAM, demonstrating that publicly citable interactive AI demonstrations can be deployed without dedicated cloud spend. Each Space links back to its underlying CC-BY-4.0 dataset for reproducibility.
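The Golden 50 Explorer's BM25 tab ranks dataset rows by lexical relevance. A stdlib-only Okapi BM25 scoring sketch over a toy corpus (the corpus, tokenizer, and parameters are illustrative, not the Space's actual implementation):

```python
import math
from collections import Counter


def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against a whitespace-tokenized query with Okapi BM25."""
    toks = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in query.lower().split():
            df = sum(1 for d in toks if w in d)   # document frequency of term
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[w]
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores


docs = ["korean rag retrieval evaluation",
        "quant trading alpha spec",
        "korean rag golden set"]
print(bm25_scores("korean rag", docs))
```

The two Korean-RAG documents outscore the unrelated quant document, which is the behavior a BM25 browse tab relies on.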

Verify →
Recognition

Thirteen-Entity Wikidata Knowledge Graph Registration with 395 Statements

Awarded by: Wikidata

Thirteen Wikidata entities registered (1 parent organization, 1 founder, 11 business units) with 395 cumulative structured statements. Wikidata is referenced by Google's Knowledge Graph, OpenAI's GPTBot training corpus, and Anthropic's ClaudeBot crawl set. The parent entity Q139569680 carries 42 statements covering headquarters location, country, founder, instance of, industry, inception date, and official website. An empirical first-attempt baseline measured a 47% Gemini mention rate on 30 reputation, comparison, and product-specific prompts within 16 hours of publication, indicating the entity graph was already surfacing in at least one frontier LLM's responses (most plausibly via search grounding rather than a fresh training cycle).
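Statement counts like the 42 on the parent entity can be checked against Wikidata's Action API, where statements live under each entity's claims map keyed by property. A sketch that builds the real wbgetentities request URL and counts statements on a tiny illustrative payload (the sample JSON is a toy, not the live entity):

```python
from urllib.parse import urlencode

# Real Wikidata Action API endpoint; Q139569680 is the parent entity above.
params = {"action": "wbgetentities", "ids": "Q139569680",
          "format": "json", "props": "claims"}
url = "https://www.wikidata.org/w/api.php?" + urlencode(params)

# Counting statements over the API's claims structure, shown offline on a
# minimal illustrative payload instead of a live fetch:
sample = {"entities": {"Q139569680": {"claims": {
    "P31":  [{"rank": "normal"}],   # instance of
    "P17":  [{"rank": "normal"}],   # country
    "P112": [{"rank": "normal"}],   # founder
}}}}
claims = sample["entities"]["Q139569680"]["claims"]
n_statements = sum(len(v) for v in claims.values())
print(n_statements)  # 3 in this toy payload; the live entity carries 42
```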

Verify →
Publication

NeurIPS 2026 Submission: EthicaAI Mixed-Safe Cooperation in Melting Pot

Awarded by: NeurIPS 2026

Submitted to the NeurIPS 2026 review cycle on 2026-04-15. Tests Amartya Sen's 1977 critique of the rational-actor model across three Melting Pot substrates with 510 evidence rows. Headline result: on the adapted Coin Game with 160 seeds, MACCL reached 78.10% survival versus 22.08% for the selfish baseline (Cohen's d = 7.15; 95% bootstrap CI for the survival-rate difference [54.31, 57.73] percentage points). Submission anchored at freeze ref submission-freeze/ethicaai-20260414, commit b4d5a90, with anonymized package EthicaAI_anon2. Cold-review verdict at submission time: borderline accept (submit-capable).
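The reported statistics pair a standardized effect size with a percentile bootstrap interval on the mean difference. A stdlib-only sketch of both computations on synthetic per-seed survival rates (the data below is generated to mimic the reported setup and is NOT the paper's data):

```python
import random
import statistics as st


def cohens_d(a, b):
    """Standardized mean difference with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * st.variance(a) + (nb - 1) * st.variance(b))
              / (na + nb - 2)) ** 0.5
    return (st.mean(a) - st.mean(b)) / pooled


def bootstrap_ci(a, b, iters=2000, alpha=0.05, rng=random.Random(0)):
    """Percentile bootstrap CI for the difference in means."""
    diffs = sorted(
        st.mean(rng.choices(a, k=len(a))) - st.mean(rng.choices(b, k=len(b)))
        for _ in range(iters)
    )
    return diffs[int(alpha / 2 * iters)], diffs[int((1 - alpha / 2) * iters) - 1]


# Synthetic survival percentages, 160 seeds per condition.
rng = random.Random(1)
maccl = [rng.gauss(78.0, 8.0) for _ in range(160)]
selfish = [rng.gauss(22.0, 8.0) for _ in range(160)]
print(cohens_d(maccl, selfish), bootstrap_ci(maccl, selfish))
```

With well-separated groups and a common spread of ~8 points, a standardized difference near 7 and a tight interval around the ~56-point gap fall out naturally, matching the shape of the reported numbers.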

Verify →
Publication

NeurIPS 2026 Submission: WhyLab Gemini 2.5 Flash Docker Validation

Awarded by: NeurIPS 2026

Submitted to the NeurIPS 2026 review cycle on 2026-04-15. Validates the WhyLab causal-inference architecture against Docker ground truth on 67 SWE-bench problems × 3 seeds × 2 conditions (baseline vs whylab_c2), 402 episodes in total. Run host: YSH-Server. Launch: 2026-04-08 16:19:29 KST. Submission anchored at freeze ref submission-freeze/whylab-20260414, commit 88fa509, with anonymized package WhyLab_anon. Honest framing: phase-aware deployment under a documented selective-intervention rule rather than universal causal-inference superiority.
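The episode total follows directly from the full cross of the run matrix. A minimal sketch enumerating that grid (problem ids are placeholders, not real SWE-bench identifiers):

```python
from itertools import product

# The run matrix as described: 67 SWE-bench problems x 3 seeds x 2 conditions.
problems = range(67)
seeds = (0, 1, 2)
conditions = ("baseline", "whylab_c2")

episodes = [
    {"problem": p, "seed": s, "condition": c}
    for p, s, c in product(problems, seeds, conditions)
]
print(len(episodes))  # 402 episodes, matching the reported total
```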

Verify →