Research Provenance — Representational Alignment & Brain-AI Alignment · All Human-Generated (Authored)

Author: Sushma Anand Akoju  ·  sushmaanandakoju.github.io  ·  CC BY-NC-ND 4.0

This page documents the provenance and development timeline of the author's research on Representational Alignment and Brain-AI Alignment — from the original position paper submitted to CogInterp @ NeurIPS 2025, through the deeper completed work accepted at AAAI 2026 and at the 7th International Conference on the Mathematics of Neuroscience and AI (Rome, June 2026). All work is entirely human-generated, authored independently by Sushma Anand Akoju, and built upon documented prerequisite study spanning multiple years. The progression from rejected position paper to accepted conference work demonstrates sustained independent research development under adverse institutional conditions.
Phase 3 — Completed Deeper Work
February 2026
Accepted — 7th International Conference on Mathematics of Neuroscience and AI · Rome, June 2026
The work was subsequently accepted at the 7th International Conference on the Mathematics of Neuroscience and AI, with a poster presentation scheduled for June 9–12, 2026 in Rome, Italy. This second acceptance occurred after the author's dismissal from UNH, further attesting to the independent standing and quality of the research.
December 2025
Accepted — AAAI 2026 Bridge Program · While Enrolled at UNH
The completed work was accepted at the AAAI 2026 Bridge Program (Logical and Symbolic Reasoning in Language Models — LMReasoning) in December 2025 — while the author was still enrolled at the University of New Hampshire. This acceptance occurred during the same period in which UNH was processing the author's EEOC charge and ultimately dismissed her in February 2026 citing "no academic progress." The AAAI acceptance directly contradicts this characterization.
Fall 2025 – Spring 2026
Finding Consensus from AI Alignment Studies: A Short Survey · Human-Generated (Authored) · Accepted
After completing the nine prerequisite areas documented in Phase 2 below and incorporating reviewer feedback from the CogInterp submission, the author completed a substantially deeper and more rigorous version of the work. This survey synthesizes alignment literature to identify where AI systems diverge from human moral reasoning, emotional intelligence, and relational context, building directly on the author's independent study in cognitive neuroscience, representational alignment, and formal logic. The work was produced without institutional advisor support: UNH had removed the author's advisor in September 2024 and, during Spring 2025, denied enrollment in preferred courses and denied ADA accommodations.
Phase 2 — Prerequisite Study Completed Before Deeper Work
2021 – Fall 2025
Nine Areas of Prerequisite Study — All Human-Completed
Before completing the deeper version of this work, the author completed the following prerequisite courses, research reading, and independent study. These are documented in the author's research wiki and academic record. All study was self-directed and human-completed — no AI tools were used as substitutes for learning.
1. Deep Learning: architectures — theory and implementation
2. Theoretical Logic: formal logic, theorem provers, first-order logic
3. Theory of Computation: Automata, Grammars & Theory of Computation (CSC 573, Prof. Giacobazzi, U Arizona)
4. Neuroscience of Language, Vision & Audio: neural mechanisms underlying language, vision, and auditory processing
5. Cognitive Science Approaches to AI: cognitive science models and their intersection with AI systems
6. Neuroscience of Reasoning: neural basis of logical reasoning and inference
7. Symbolic Encodings in Deep Learning: symbolic representations in deep learning and cognitive systems
8. The Role of Symbols (Tom Mitchell): Prof. Tom Mitchell's work on symbolic representations — long-term mentorship
9. CIRTL & Teaching as Research: Teaching-as-Research methodology (CIRTL Scholar Level III, Spring 2025) — informing the pedagogical framing of the alignment work
Phase 1 — Original Position Paper
July 2025
Position Paper — Submitted to CogInterp @ NeurIPS 2025 · Human-Generated (Authored)
A position paper on interpreting cognition in deep learning models was submitted to the First Workshop on CogInterp: Interpreting Cognition in Deep Learning Models at NeurIPS 2025. The paper was not fully formed at this stage — it represented an early articulation of the author's ideas at the intersection of cognitive science, neuroscience, and AI alignment. The paper received review comments which the author used to deepen and complete the work. This submission was made while enrolled at the University of New Hampshire.
Provenance Statement
This research represents a documented progression from an early position paper (CogInterp @ NeurIPS 2025, rejected) through sustained independent study across nine prerequisite areas to a completed work accepted at two international venues (AAAI 2026 and Rome 2026).

All work is entirely human-generated and authored independently by Sushma Anand Akoju. The progression demonstrates exactly the kind of iterative research development that constitutes academic progress — directly contradicting the University of New Hampshire's characterization of "no academic progress" during this period.

The deeper work was completed without institutional advisor support, without ADA accommodations (approved and then withdrawn, February 20 – April 1, 2025), and under conditions of housing insecurity and immigration uncertainty following UNH's EEOC-related dismissal. The two international acceptances stand as independent external validation of the author's research quality and productivity.

Full research wiki: sushmaanandakoju.github.io/representation-alignment-works
For IP verification: sushma.ananda13@gmail.com