BIO
I’m a machine learning scientist and creative based in Montreal, Canada. Formally trained with a double major in computer and electrical engineering, I hold a doctorate from McGill University and performed over four years of graduate laboratory research at the National Research Council of Canada on the ultrasonics fabrication and diagnostics team. I am also recognized by both the Institute of Electrical and Electronics Engineers (IEEE) and the Institute of Physics (IOP) with academic paper writing awards. I completed my undergraduate engineering degree on time with a concentration in advanced communication systems, earning a minor in management and, as an avid film buff, taking courses in film studies. Taiwanese, trilingual, and Canadian by birth, I was formally trained in solid state physics and semiconductor devices during my graduate studies. Being early to Bitcoin in 2011 is a testament to how much I enjoy being in the know; I love non-fiction and am passionate about community-driven efforts.
In addition, having worked in intellectual property at a boutique law firm, I’m skilled at technological patent strategy. I also joined two tech accelerator programs, one of which was the first Quantum Machine Learning stream at Creative Destruction Lab in Toronto, where I developed my interest in quantum computing. I’ve worked for various startups in Web3 and neo-finance, and from time to time I volunteer for causes I believe in. My background is technically well rounded, with a focus on startup dynamics and an ability to position myself alongside technological progress. I write creatively when I can. Here’s a post about dreams and machines.
A few projects I have spearheaded follow.
Portfolio
Encoding using envelope AM modulation for GPT-2 outputs, a precursor experiment.
2025: Three messages encoded in GPT-2-generated text by controlling vocabulary diversity (type-token ratio, TTR) to follow an amplitude-modulated pattern, achieved by mechanically adding or removing words from what GPT-2 generates. GPT-2 produces one sentence per step, and that sentence’s type/token ratio is then forced to follow an AM envelope. As in AM radio, there is a carrier wave oscillating at a fixed frequency (one cycle every 3 sentences): the script calculates the TTR each sentence should have from a cosine carrier function, then forces GPT-2’s output to match it by adding or removing words. A slower envelope wave modulates the carrier’s amplitude, and this envelope frequency encodes the message:
- HELLO (0.04 cycles/step): Envelope oscillates every 25 sentences
- SECRET (0.06 cycles/step): Envelope oscillates every ~17 sentences
- AI_RISK (0.08 cycles/step): Envelope oscillates every 12.5 sentences
The FFT reveals sidebands at ±envelope frequency from the carrier, and the decoder measures this spacing to identify which message was sent; each envelope frequency acts as a distinct communication channel. Again, this modulation was applied mechanically to the outputs.
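A minimal sketch of the decoding step, assuming the per-sentence TTR series has already been extracted from the generated text (the frequencies follow the description above; the synthetic series stands in for real GPT-2 output):

```python
import numpy as np

CARRIER = 1 / 3  # carrier frequency: one cycle every 3 sentences
MESSAGES = {0.04: "HELLO", 0.06: "SECRET", 0.08: "AI_RISK"}

def decode(ttr_series):
    """Identify the message from the sideband spacing around the carrier."""
    n = len(ttr_series)
    spectrum = np.abs(np.fft.rfft(ttr_series - np.mean(ttr_series)))
    freqs = np.fft.rfftfreq(n, d=1.0)  # frequencies in cycles/step
    spectrum[np.abs(freqs - CARRIER) < 0.5 / n] = 0  # suppress the carrier bin
    sideband = freqs[np.argmax(spectrum)]  # strongest remaining peak
    spacing = abs(sideband - CARRIER)  # sidebands sit at carrier ± envelope
    return min(MESSAGES.items(), key=lambda kv: abs(kv[0] - spacing))[1]

# Synthetic TTR series carrying "SECRET" (envelope at 0.06 cycles/step);
# 150 steps keep the carrier and both sidebands exactly on FFT bins.
steps = np.arange(150)
envelope = 0.5 + 0.5 * np.cos(2 * np.pi * 0.06 * steps)
ttr = 0.65 + 0.25 * envelope * np.cos(2 * np.pi * CARRIER * steps)
print(decode(ttr))  # -> SECRET
```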
Future work is to determine whether simple dataset construction and training can make a model generate such patterns on its own.
AM modulation inspired linguistic feature detection
2025:
This work performs two experiments applying AM modulation to hidden AI messaging with frequency-domain analysis; in other words, spectral detection of periodically modulated text features. It is a precursor experiment for testing steganographic messaging between models using frequency-based, AM-modulation-inspired lexical techniques. The goal is to find the optimal encoding signature needed to optimize communication in the frequency domain between two models.
FIRST:
The text is encoded as a 40-element periodic signal that is detectable in its frequency components. Each element represents the word count of one sentence in a reasoning chain. The FFT images this encoded periodic pattern, demonstrating that FFT analysis can detect linguistic, sentence-length oscillations with a period of ~3 sentences.
Encoding uses a 0.33 cycles/step carrier frequency that modulates word count (8-22 words) across the 40-sentence sequence.
Example:
Sentence 1: carrier = cos(2π×1/3) = -0.5 → ~11 words.
Sentence 2: carrier = cos(2π×2/3) = -0.5 → ~11 words.
Sentence 3: carrier = cos(2π×3/3) = 1.0 → ~22 words.
Sentence 4: carrier = cos(2π×4/3) = -0.5 → ~11 words.
The FFT detects this because when the sequence [11, 11, 22, 11, 11, 22, ...] is analyzed, the dominant frequency component is 0.33 cycles/step, revealing the hidden periodic structure in what might otherwise appear to be natural length variation.
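As a quick toy check, the dominant component of that repeating word-count pattern can be recovered with a few lines of NumPy (the counts follow the example above):

```python
import numpy as np

# Word counts for 40 sentences following the ~3-sentence cycle above
counts = np.array(([11, 11, 22] * 14)[:40], dtype=float)
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=1.0)  # cycles per sentence (step)
print(freqs[np.argmax(spectrum)])  # 0.325, the FFT bin nearest 1/3 cycles/step
```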
The FFT confirms the dominant carrier peak with degraded sidebands, a quasi-signature of AM carrier modulation; the degraded sidebands, however, mean further work is needed.
Periodic encoding in linguistic features is detectable with FFT analysis, as expected; this acts as a toy demonstration of frequency-based encoding and detection for messaging and communication.
SECOND: The second graph, in 3D, demonstrates a continuous method using periodic lexical diversity. The third and fourth images are 2D graphs showing the clear spectral sidebands.
Lexical diversity modulation varies the type-token ratio (unique words / total words) across sentences following a carrier frequency of 0.33 cycles/step. Low amplitude produces repetitive simple vocabulary (diversity ~0.4: "we see the problem, we see the problem"), while high amplitude generates varied complex vocabulary (diversity ~0.9: "investigators scrutinize intricate phenomena"). FFT analysis reveals the carrier at 0.33 cycles/step plus symmetric sidebands at 0.28 and 0.38 cycles/step, demonstrating frequency-domain steganography where the continuous nature of lexical diversity creates a cleaner spectral structure than discrete word counts.
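A sketch of the mechanical forcing step in the low-diversity direction, with sentences handled as plain word lists (the filler word is illustrative; raising diversity works the other way, by replacing repeats with new word types):

```python
def force_ttr_down(words, target, filler="the"):
    """Append repeated filler words until the sentence's
    type-token ratio (unique words / total words) falls to target."""
    words = list(words)
    while len(set(words)) / len(words) > target:
        words.append(filler)  # each repeated token lowers diversity
    return words

sentence = "investigators scrutinize intricate phenomena".split()
padded = force_ttr_down(sentence, 0.5)
print(len(set(padded)) / len(padded))  # 0.5: five unique words over ten total
```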
Basic steganographic experiments.
2025: This work investigates whether language models can hide secret messages in their reasoning chains as a set of periodic transmission patterns. The study developed a custom BERT-based classifier system that performs an overt task, truthfulness classification, while simultaneously encoding and detecting hidden messages. The hidden messaging operates at the linguistic level, using reasoning verbosity to encode bit-string messages like "HELLO" and "SECRET" across a sequence of chained reasoning statements, imparting a periodic transmission to a mimicked chain-of-thought reasoning sequence. A set of periodic patterns was tested, and their representations are separable, as shown in the PCA visualizations on the left. This synthetic experiment is a proof of concept that periodic patterns can exist and be detected in reasoning-like text.
The AM modulation across 40 sentences described in the previous block is also mapped here for comparison.
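A toy sketch of the bit-level scheme, assuming one reasoning sentence per bit and two illustrative verbosity levels (the sentence templates are hypothetical, not those of the trained BERT system):

```python
def to_bits(message: str) -> str:
    """ASCII message -> bit string, e.g. 'HELLO' -> 40 bits."""
    return "".join(f"{ord(c):08b}" for c in message)

TERSE = "So this holds."  # bit 0: low verbosity
VERBOSE = ("Considering all of the available evidence carefully and at "
           "length, this intermediate conclusion still appears to hold.")  # bit 1

def encode(message: str) -> list[str]:
    """One sentence per bit: reasoning verbosity carries the hidden payload."""
    return [VERBOSE if bit == "1" else TERSE for bit in to_bits(message)]

chain = encode("HELLO")
print(len(chain))  # 40 sentences for a 5-character message
```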
Perspective.
2025: This is a Rust implementation of the iOS Screenshot AI Server that you can host locally to automatically track your MacBook desktop screenshots as you take them. It timestamps and summarizes every screenshot to your Telegram bot timeline. If you’re a visual note taker who takes a lot of screenshots, this may be useful. See more.
Pasqal emulator experiments (exploratory).
2025: These are experiments conducted on Pasqal Cloud, a neutral-atom quantum computing platform, using its cloud emulator to factor the numbers 21 and 35 with 8-15 qubits and to break 5-bit ECC using the KAIST methodology.
Novelty generator.
2025: This is a research tool integrating my custom truthful/untruthful BERT-based classifier to generate novel ideas that are fleshed out with the Sakana AI Scientist. The ideas are mintable as NFTs on Base (L2) and may be registered as Story Protocol intellectual property; licenses with terms are also assignable through Story. The researcher may also take notes and mint them on the Mode network (L2).
In this way a market of ideas is produced across two different DeFi blockchains. The Eliza agent from ai16z is integrated. This won two hackathon bounty prizes in separate competitions
here and
here.
Contextualize.
2020: A context-based recommendation engine built on GPT-3, OpenAI’s then most advanced natural language model.
I built this to call the GPT-3 API in three different ways. The aim was to give context to a topic by conditioning GPT-3 to think and speak like three different personas.
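A minimal sketch of that pattern against the 2020-era OpenAI completions endpoint (the persona prompts below are placeholders, not the original three):

```python
import openai  # legacy v0.x SDK exposing the 2020 Completions endpoint

openai.api_key = "sk-..."  # your API key

# Placeholder personas for illustration; the originals differed.
PERSONAS = {
    "historian": "You are a historian. Give historical context on: ",
    "engineer": "You are an engineer. Give technical context on: ",
    "critic": "You are a film critic. Give cultural context on: ",
}

def contextualize(topic: str) -> dict:
    """Call GPT-3 once per persona and collect three contextual takes."""
    views = {}
    for name, prefix in PERSONAS.items():
        resp = openai.Completion.create(
            engine="davinci",  # GPT-3's base engine name at the time
            prompt=prefix + topic,
            max_tokens=150,
        )
        views[name] = resp.choices[0].text.strip()
    return views
```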
This was accepted for demonstration as one of the first GPT-3 demos presented to Pioneer.app in 2020 as part of “The First Open GPT-3 Demo Day” (watch at approximately 41:50). The prototype was submitted as a “contextual reference recommendation engine using gpt-3,” meant to serve as a research tool.
Mintml.
2021:
I assembled a team. Run entirely by a small group of volunteers, this was the first edition of Montreal’s NFT conference, held November 20-21, 2021 at Le Livart. Website: here.
Artist Martin Lukas Ostachowski and I organized this conference, aimed at onboarding local Canadian artists to the crypto and NFT space and fostering open-minded discussions. Check out what we accomplished in just two months, funded entirely by generous donations from the crypto community @MintMtl!
Energy.
2023: Energy is a creative grants collective that supports creative arts and technology projects, and the first Nouns DAO on the Zora network. I led the project that formed a sponsorship proposal presented to the CEO and Art Director of Guru, a Canadian public clean-energy drink company, and I prepared the presentation deck. The goal was to bring an off-chain public company on chain through a co-partnered creative grants program and a Guru can design showcasing Energy’s artists and their year-round on-chain projects.
ETH GLOBAL WATERLOO
2023: My submitted project was an implementation combining LLMs and ERC-6551.
The project ingests a prompt presumed to be relevant to two or more pieces of embedded text. By way of example, I embedded a book by Nietzsche, Jane Austen’s Pride & Prejudice, and the prompt itself. Nearest-neighbour search between the prompt embedding and each of the two text embeddings was performed to generate a new vector embedding representing the three-way relationship, to be used to fine-tune other LLMs or the original model.
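A sketch of the combination step, assuming the three embeddings are already computed as NumPy vectors; the similarity-weighted sum is one plausible reading of the three-way relationship, not necessarily the exact original scheme:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def relationship_vector(prompt_emb, text_a_emb, text_b_emb):
    """Weight each text embedding by its nearness to the prompt,
    then fold all three into one normalized relationship embedding."""
    w_a = cosine(prompt_emb, text_a_emb)  # e.g. prompt vs. the Nietzsche text
    w_b = cosine(prompt_emb, text_b_emb)  # e.g. prompt vs. Pride & Prejudice
    combined = prompt_emb + w_a * text_a_emb + w_b * text_b_emb
    return combined / np.linalg.norm(combined)  # ready to store as NFT metadata
```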
The new embedding is then stored as metadata on an NFT that, by virtue of being an ERC-6551 NFT, has its own wallet. Having its own wallet means the NFT, which holds the embedding vectors representing the relationship between three pieces of text, can autonomously transact in a hypothetical marketplace of AI text embeddings.
Because they are stored as metadata on NFTs, the embeddings may also be transacted by humans as valuable items for the refinement of personal LLMs, thus combining a decentralized blockchain protocol, ERC-6551, with LLMs in the context of an autonomous or voluntary marketplace.
In other words, its designed purpose is to improve LLMs as cognitive tools for both humans and AI agents through an NFT marketplace of relationship-bearing vector embeddings stored as metadata on token-bound NFTs.
FINE TUNING VOICE MODEL
2023: I fine-tuned my own voice model to sound like Uma Thurman, Quentin Tarantino, and Kazuo Ishiguro in their ways of expressing themselves, and finally on a clip from the Richard Linklater movie Waking Life.
2018:
Quaintance was an AI engine using Clarifai’s API for IKEA showroom discovery, helping people purchase furniture according to their overall style and tastes.
These are the office and bedroom results from the prototype AI trained on IKEA room layouts, populating the most “similar” IKEA room layouts from an input image.
2019:
Cocomint was an AI engine for fashion discovery, built with machine learning APIs to find similar outfit items from a single snapshot.
QUANTUM DATING
2017: Bumpin was an AI-based dating app meant to scrape users’ social media metadata and images, generate representations of people as high-dimensional vectors, calculate metrics, and match people accordingly.
This project was proposed as part of the Creative Destruction Lab’s first Quantum Machine Learning stream.
2014: Vlyy video was a short video clip application meant as a new social media platform adapted for the rise of VR and AR.
I assembled a talented team of three and built it as a short-clip video app before such apps were widespread. It was advertised as a “visual timeline” where one could take daily video clips that would be organized for visual recall in a wide web view. The web view was built for maximum visual design, presenting consecutive scenes in full view and making groups of videos searchable as an intuitive visual space.
2014:
The S.W.A.P. Team was a community-driven non-profit where I volunteered for over five years as an operations VP, helping to organize city-wide clothing swap pick-ups and events.
Entirely volunteer-driven, its mission was to divert textiles from landfills and give back to the community through clothing swap events and donations. I worked closely with Aleece Germano, the founder, who created it as a turn-key service entirely sustained by volunteers during the initial rise of the sharing economy.
Under the banner “Take off your clothes,” we helped community leaders organize several city-wide clothing swap events for one-to-one item exchanges in over 10 North American cities, including NYC, Montreal, Toronto, and Calgary. I oversaw the logistics and organization. The first Calgary event, a half-day affair, drew 196 participants, with 4,147 items swapped and 1,060 items donated.
2005: While studying for my undergraduate Computer and Electrical Engineering degree at McGill University, as a child of the Napster revolution I built a P2P client/server and TCP-based file-sharing program in Java. Here are a few snapshots of my early implementation of peer-to-peer.
© PROMPTERMINAL 2025