Labs is run by SocioFi’s founding team with rotating contributions from across the engineering organisation. Everyone who ships production AI systems also contributes to understanding them better.
SocioFi was founded by two BUET engineers who believe the best technical research happens when the people doing the research are also the ones building and maintaining production systems.
"I've been debugging AI-generated code since before it was cool."
Arifur co-founded SocioFi Technology after recognising a gap between what AI coding tools could generate and what it took to ship those outputs as production software. At Labs, he leads applied AI research, focused on where AI systems fail in real-world use and what it takes to make them reliable enough to trust with consequential tasks. His research interests are practical rather than theoretical: he cares more about “does this work in production?” than “does this work on a benchmark?” He writes about agent reliability, industry automation, and the liability and ethical dimensions of deploying autonomous software systems.
"Good research starts with admitting you don't know the answer yet."
Kamrul leads developer tooling and system architecture research at Labs. As CTO, he is responsible for the technical direction of every system SocioFi builds, but Labs is where he gets to explore questions without a client deadline attached. His research focuses on the engineering problems of running AI in production at scale: making agent pipelines observable, handling failures gracefully, and building developer tooling that makes working with AI-generated code less painful. He is the primary author of most technical articles on the Labs blog and the architect of the reference architecture documented on this site.
Labs research draws on the collective experience of every engineer at SocioFi. We rotate contributors across research streams to maintain diverse perspectives — the engineer who spent six months on a client’s legacy codebase brings different insight to an observability experiment than the one who just shipped a new AI pipeline.
Contributors are credited in every article and research note they help produce, and research credit counts toward performance evaluation. We do not run a two-tier system where Labs is the “real” engineering and Studio is the “day job”: both matter, and the best Labs research comes from engineers who are actively building.
We occasionally partner with external researchers on specific experiments, particularly around agent reliability, evaluation methodology, and AI system failure modes. We are selective: we only partner on problems directly relevant to what we are building. But when the fit is right, we commit properly, offering access to real workload data, engineering time, and co-authorship on published findings.
We are particularly keen to connect with university researchers working on AI reliability, software engineering automation, and human-AI collaboration. If you are looking for an industry partner for a research collaboration or dataset access, reach out.
If you write or research in the areas we cover — AI systems, software engineering, developer tooling — we are open to guest contributions to the Labs blog, co-authorship, and collaborative experiments.
Some of our research touches industries where we lack deep expertise. We occasionally bring in domain experts from fields such as healthcare, finance, and logistics to evaluate AI system behaviour in their context.
Send us a note with your research context. We respond to every genuine inquiry, even if the answer is “not right now.”