Mattia in the Loop

How to Accidentally Wipe a Database Using a Coding Assistant

Published 2025-07-01 by Mattia · 5 min read

The story of a real, firsthand experience using AI in software development: losing control of one's own project, the consequences that followed, and, of course, the lesson learned.


A few days ago, I was conducting some experiments with coding assistants on a use case involving translating an API layer from Scala + Play Framework to Kotlin + Spring Boot. My goal was to understand how much of this activity could be automated using AI and what the quality level of the result would be.

First Attempt: A Predictable Disaster

Despite knowing it wasn’t the right approach, the first thing I tried was a monolithic and rather generic prompt. Something like: “Translate the entire XXX project from Scala + Play Framework into a new project from scratch in Kotlin + Spring Boot.” The result, even after several iterations, was predictably terrible. The generated project didn’t compile, was incomplete because the coding assistant couldn’t translate the entire source project in a single step, and was nowhere near correct or of acceptable quality.

In particular, the generated project was cluttered with irrelevant details and lacked a clear structure that could serve as the system’s backbone. It couldn’t even be considered a good starting point to build upon. Reverted.

Second Attempt: Some AI-Assisted Coding Techniques

In a second attempt, I applied various techniques that generally make AI-assisted software development far more effective. I mentioned some of these in a previous article: 8 techniques for effective AI-assisted software development.

The result was better this time, but still not satisfactory from a quality standpoint. By breaking the work into small successive tasks, I managed to keep the project coherent and consistent, and this time it compiled after a few iterations. The rework was minimal, and some functionalities were translated 100%.

The main problem this time was the literal translation of abstractions from the source framework, Play, into the destination framework, Spring Boot. The AI managed to translate some parts functionally, but it did so by replicating Play concepts rather than mapping the abstractions and best practices onto their Spring Boot equivalents. In some cases, it failed to recognize that third-party libraries would have simplified the project, showing a constant tendency to write code from scratch instead.
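To make this concrete, here is a hypothetical Kotlin sketch of the pattern (the types, names, and endpoint are invented for illustration, not taken from the actual project): a literal translation re-implements Play’s Action/Result style by hand, while the idiomatic version lets Spring own the HTTP layer.

```kotlin
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RestController

data class UserDto(val id: Long, val name: String)

// Literal translation: a hand-rolled Result type mimicking Play's
// Action/Result abstractions on top of Spring instead of using its idioms.
data class PlayStyleResult(val status: Int, val body: Any?)

class UserHandler {
    fun getUser(id: Long): PlayStyleResult =
        PlayStyleResult(200, UserDto(id, "example")) // Ok(...) re-implemented by hand
}

// Idiomatic Spring Boot: routing, status codes, and serialization
// are delegated to the framework's own abstractions.
@RestController
class UserController {
    @GetMapping("/users/{id}")
    fun getUser(@PathVariable id: Long): ResponseEntity<UserDto> =
        ResponseEntity.ok(UserDto(id, "example"))
}
```

The second version is shorter and delegates everything to the framework, which is precisely the kind of translation of abstractions the assistant struggled to make on its own.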

Third Attempt: More Control Over Setup and Architecture

The project could have been salvaged with some manual rework, but for the sake of experimentation, I sought a third way. I started by scaffolding a Spring Boot project from scratch, partly through AI and partly manually, building a solid and well-structured foundation. I then analyzed the main differences between the two frameworks and front-loaded some strategic decisions as architecture decision records (ADRs), formalized in Markdown documents to include in the assistant’s context.

At this point, I roughly followed the workflow of the second attempt, but with much more control over the project, conducting more thorough reviews and in some cases implementing parts manually. The goal was to arrive at a skeleton that satisfied me, then proceed to translate individual APIs with more confidence and more automatic generation.

The result was decidedly better, although the setup phase required much more time, probably just slightly less than what I would have spent without an assistant. However, the structure and especially the documentation produced allowed me to significantly accelerate the second phase of translating individual APIs.

The Incident

In my experiment, I followed a roughly test-driven (TDD) approach, unlike the original project, which was completely devoid of unit tests. The destination project used JPA and Hibernate for data access, and the assistant correctly configured a test setup using PostgreSQL and Testcontainers.

In this configuration, it set one particular property: spring.jpa.hibernate.ddl-auto = create. With this value, Hibernate drops all existing tables and recreates the schema from the entity definitions at every application startup. Since the tests were configured to run against a sandbox database, this made perfect sense.
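As a rough sketch of what such a setup looks like (the class name and PostgreSQL image tag are my own reconstruction, not the project’s actual code), the Testcontainers wiring and the fateful property might read like this in Kotlin:

```kotlin
import org.junit.jupiter.api.Test
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.context.DynamicPropertyRegistry
import org.springframework.test.context.DynamicPropertySource
import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.junit.jupiter.Container
import org.testcontainers.junit.jupiter.Testcontainers

@SpringBootTest
@Testcontainers
class TranslatedApiTest {

    companion object {
        // Throwaway PostgreSQL instance, created and destroyed per test run
        @Container
        @JvmStatic
        val postgres = PostgreSQLContainer<Nothing>("postgres:16")

        // Point Spring's datasource at the container, not a shared database
        @DynamicPropertySource
        @JvmStatic
        fun datasourceProps(registry: DynamicPropertyRegistry) {
            registry.add("spring.datasource.url", postgres::getJdbcUrl)
            registry.add("spring.datasource.username", postgres::getUsername)
            registry.add("spring.datasource.password", postgres::getPassword)
            // Harmless here: the schema is dropped and rebuilt inside the container.
            // Pointed at a shared database, this same line wipes every table.
            registry.add("spring.jpa.hibernate.ddl-auto") { "create" }
        }
    }

    @Test
    fun `context starts against the container`() {
        // Real tests would exercise the translated endpoints here.
    }
}
```

The key point is that ddl-auto = create is only safe because the datasource properties point at a database that exists solely for the duration of the test run.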

I had personally reviewed the configuration produced by the coding assistant, but that setting hadn’t caught my attention. I had also asked the assistant to double-check that there were no queries in the tests that would modify or corrupt data. Nothing came up, because strictly speaking there weren’t any: the destructive behavior lived in the configuration, not in the queries.

At some point, I decided to run the first tests against a remote test database, already populated with relevant data and also used by colleagues. Since I hadn’t yet implemented any write APIs, I felt fairly confident there would be no impact on the database (which, in any case, wasn’t production). Yet… it happened. As soon as the app started, Hibernate dropped every table in the remote database, and I had to restore them from a backup, causing some disruption for colleagues during development.

Control and Isolation

From this incident, fortunately without serious consequences, I learned several things:

In general, I understood that by relying heavily on AI, you risk losing full understanding and control of your own system, and this easily becomes a source of errors, even serious ones. The two weapons against this problem are prevention, through testing and strict configurations, and continuous, thorough review of the most critical parts.
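As one possible form of that prevention (entirely my own sketch, not something from the project), a small Spring Boot listener could refuse to boot when a destructive ddl-auto value is pointed at anything that doesn’t look like a local or containerized database:

```kotlin
import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent
import org.springframework.context.ApplicationListener

// Hypothetical safety net: fail fast when a schema-destroying ddl-auto value
// is combined with a datasource URL that is not obviously local.
class DestructiveDdlGuard : ApplicationListener<ApplicationEnvironmentPreparedEvent> {
    override fun onApplicationEvent(event: ApplicationEnvironmentPreparedEvent) {
        val env = event.environment
        val ddlAuto = env.getProperty("spring.jpa.hibernate.ddl-auto", "none")
        val url = env.getProperty("spring.datasource.url", "")
        val destructive = ddlAuto in setOf("create", "create-drop")
        val looksLocal = "localhost" in url || "127.0.0.1" in url
        if (destructive && !looksLocal) {
            error("Refusing to start: ddl-auto='$ddlAuto' against non-local database '$url'")
        }
    }
}
```

Because this event fires before the application context exists, the listener has to be registered programmatically (for example via SpringApplication.addListeners) or in spring.factories rather than as a bean; Testcontainers URLs typically point at localhost, so a test setup like the one above would still pass.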

I will obviously continue experimenting, but with more caution. Lesson learned.