How to Roll Your Own File Uploads in Golang

jason on October 20, 2025, 09:46 PM

AWS being down got you down? Well no worries, today I’m gonna show you how to integrate file upload functionality into your Golang back end without relying on AWS S3 or any third party services. As long as you have your code, a machine to run and serve it on (virtual or physical), and the internet is up, you’ll have the ability to upload and view files in your web app! (Decentralization FTW!)

Ok, so you want your Golang web app back end to support file uploads, not just text content. How does this work? Well first off, it doesn’t really matter which type of file you’re going to allow to be uploaded, it’ll work the same way. For good design though, you’ll probably wanna limit your front end to only accepting certain file types (ones you’ll have the appropriate logic to present back in the UI when requested). You’ll also want to duplicate that logic on the back end so that clever hackers who know how to bypass the front end validation don’t slip things past you (especially malicious files!), but more on this later.

We’re Gonna Do This With the net/http Package

Right off the bat, let’s get one thing straight. How you do this will depend on which web server package you decide to use (or if you even decide to use one at all). But in this scenario, I’m going to demonstrate using Go’s net/http package, mostly because it’s pretty simple and straightforward and is maintained by the core Go team. So let me break down what we mainly want to accomplish here:

  1. Check whether or not a file was uploaded. Maybe we want to allow file uploads to be optional, in that case we don’t want to run any of the code that processes an uploaded file if there isn’t one to process. Maybe we want to make uploads mandatory, in which case we’d want to immediately detect whether or not a file was sent along with the request so that we can send an error back to the client if there wasn’t one. Either way, knowing whether we have a file or not is good info to have!
  2. Ensure that a directory exists to upload the file into. What good is receiving a file if there’s nowhere to put it? It might seem trivial to ensure that the uploads directory exists, but if it’s the first time this app is run on a new deployment, the directory might not have been created yet. Also, since our app is ‘Dockerized’ we’ll need to have a way to make sure that directory stays around even if the container is stopped. If we just save files to a directory inside the Docker container, they’ll disappear when the container does. I’ll explain more on that later.
  3. Create the file on our server. Remember, what has been uploaded is technically the ‘contents’ of a potential file. It doesn’t become an actual file in a directory until our app creates that file for the contents to live in. For example, when you create a file on your computer using notepad or something, you might create a file called something.txt then you add contents to that file and save it. Basically the same idea.
  4. Copy the contents of the file from the request object into our newly created file. Pretty self explanatory. This is the step where our file actually begins to exist on the server.
  5. Let Go know we’re done so it can free up the resources used to create the file. We don’t wanna waste precious memory!
  6. Save our new file’s path string in the database so we can retrieve it along with a post later.

Doing all this will give us the ability to save uploaded files, but we’ll also need the ability to serve them back to the client. Here’s what we need to do to accomplish this:

  1. Add a ‘file’ input on the front end that accepts uploads and sends as multipart form data.
  2. Add a handler to the uploads directory so we can serve files from it.
  3. Create an img tag in our front end that will have the file’s path/URL (in this case we’re using images) as its src attribute so that it can display the image. Of course you’d adapt this to different file types (video, audio, PDF, etc.) by using different HTML tags.

One important note if you are using Docker to deploy!!! You might find that you deploy your app and it’s working… until you stop the Docker container and restart it. All of a sudden, all your files will point to broken links! The reason is that Docker apps basically live in virtual space on your machine. When the container stops running, all the directories that went along with it also vanish.

But don’t worry, there’s a pretty simple and straightforward way to remedy this. You’ll need to use Docker’s Volumes feature to persist data to a directory outside of the Docker container. Volumes are really flexible because they’re basically directories that you can create and ‘mount’ anywhere inside of your application. You can even create volumes on remote hosts over SSHFS with volume drivers like vieux/sshfs. You can create and manage them via the Docker CLI or in a Docker Compose file, which I’ll demonstrate later in the post. One small potential downside of Volumes though is that they’re managed by Docker, so you’re not meant to access their contents directly from the host they’re stored on. But for that use case, Docker has bind mounts which you can use instead. Bind mounts aren’t quite as efficient as Volumes but are great if you need to access the data from something other than the Docker container that created it.
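To make the Volume vs. bind mount distinction concrete, here’s a minimal Compose sketch. The service name and paths mirror the setup used in this post; the commented-out bind mount line is just an illustrative alternative, not something this app needs:

```yaml
services:
  main:
    volumes:
      # Named volume: managed by Docker, survives container restarts
      - uploads:/app/uploads
      # Bind mount alternative: a host directory you can browse directly
      # - ./uploads:/app/uploads

volumes:
  uploads:
```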

Now that we have a broad overview of what we need to accomplish to get the basic infrastructure of the file uploads feature working, I’m gonna list a few other things that really should be done in order to make this feature ‘production ready’. I won’t demonstrate how to implement them in this post though, I’ll save that for another one.

  1. Put validation on the front end
    1. Limit the amount of files that can be uploaded at once
    2. Limit the file size of any individual file or all simultaneously uploaded files in total
    3. Limit the types of files that can be uploaded to specific ones
  2. Put validation on the back end
    1. All the same stuff as the front end
  3. Implement some type of file re-sizing or compression to save on disk space and bandwidth
  4. Allow sending a custom file name for a file and saving it on the post data in the db
  5. Allow saving other metadata about the file in the db (size, dimensions, mime type, etc)
  6. Implement a system to create backup snapshots then compress and archive them
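I’m saving the full implementation of this checklist for another post, but just to make the back end half of the validation concrete, here’s a rough sketch. The size limit, the extension allowlist, and the validateUpload helper are all my own illustrative choices, not anything this app requires:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Example limits -- tune these for your own app.
const maxUploadBytes = 5 << 20 // 5 MB

var allowedExts = map[string]bool{".jpg": true, ".jpeg": true, ".png": true, ".gif": true}

// validateUpload checks a client-supplied filename and size before we touch
// the disk. filepath.Base strips any sneaky "../" directory components.
func validateUpload(filename string, size int64) (string, error) {
	clean := filepath.Base(filename)
	ext := strings.ToLower(filepath.Ext(clean))
	if !allowedExts[ext] {
		return "", fmt.Errorf("file type %q not allowed", ext)
	}
	if size > maxUploadBytes {
		return "", fmt.Errorf("file too large: %d bytes", size)
	}
	return clean, nil
}

func main() {
	clean, err := validateUpload("../../etc/cat.png", 1024)
	fmt.Println(clean, err) // the directory components get stripped

	_, err = validateUpload("notes.txt", 1024)
	fmt.Println(err) // rejected: .txt isn't in the allowlist
}
```

In the handler, you’d call something like this with handler.Filename and handler.Size before ever creating the destination file.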

Implementing the Code

Check if File Is Being Uploaded

Ok let’s put in the check to see if a file was sent along with the request. We can do this pretty easily by calling the request object’s FormFile method, which returns a file, a handler, and an error. Then we check the error: nil means a file was uploaded, http.ErrMissingFile means there wasn’t one, and anything else is a genuine problem worth reporting back. Like this:

file, handler, err := r.FormFile("file")

if err == nil {
    defer file.Close() // put in a defer to close the file so that resources are freed once this block is done
    fmt.Printf("DEBUG: File detected, processing upload. Filename: %s\n", handler.Filename)
…
} else if errors.Is(err, http.ErrMissingFile) {
    fmt.Printf("DEBUG: No file uploaded\n")
} else {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}

Ensure Upload Directory Exists

This is admittedly a bit redundant since we’re creating the directory via a Volume in the Docker Compose file. However, os.MkdirAll() is idempotent: it creates the directory if it’s missing and is a no-op if it already exists. Keeping this line ensures that this code will work even if run directly on a machine without Docker.

…
uploadsDir := "/app/uploads"
if err := os.MkdirAll(uploadsDir, 0755); err != nil { 
    fmt.Printf("ERROR: Failed to create directory %s: %v\n", uploadsDir, err)
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
}
…

Create the File on the Volume

We’ll use the Create() method from os to create the destination file on the Volume. Make sure to put in the Close() call so that the resources required to create the file are freed up once this block finishes.

file_path = filepath.Join(uploadsDir, filepath.Base(handler.Filename)) // filepath.Base strips any client-supplied directory components
fmt.Printf("DEBUG: Creating file: %s\n", file_path)
dst, err := os.Create(file_path)
if err != nil {
    fmt.Printf("ERROR: Failed to create file %s: %v\n", file_path, err)
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
}
defer dst.Close()

Copy the Uploaded File to its Destination

We’ll use the Copy() method from io to copy the uploaded file contents to the destination file that we created on the Volume.

fmt.Printf("DEBUG: Starting file copy for: %s\n", file_path)
bytesWritten, err := io.Copy(dst, file)
if err != nil {
    fmt.Printf("ERROR: Failed to copy file data: %v\n", err)
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
}
fmt.Printf("DEBUG: Successfully wrote %d bytes to file: %s\n", bytesWritten, file_path)

Let Go Know We’re Done so it Can Free Up Resources

We’ve actually already done this by adding in the defer calls, just wanted to point that out.

defer file.Close()
…
defer dst.Close()
…

So now, if we put everything together, it would look something like this:

func (app *ItemApp) handleAddItem(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        return
    }

    name := r.FormValue("name")
    content := r.FormValue("content")
    if content == "" {
        http.Error(w, "Content cannot be empty", http.StatusBadRequest)
        return
    }

    file, handler, err := r.FormFile("file")

    var file_path string

    if err == nil {
        defer file.Close()

        fmt.Printf("DEBUG: File detected, processing upload. Filename: %s\n", handler.Filename)

        // Ensure uploads directory exists
        uploadsDir := "/app/uploads"
        if err := os.MkdirAll(uploadsDir, 0755); err != nil {
            fmt.Printf("ERROR: Failed to create directory %s: %v\n", uploadsDir, err)
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Create the destination file
        file_path = filepath.Join(uploadsDir, filepath.Base(handler.Filename)) // Base guards against path traversal in the filename
        fmt.Printf("DEBUG: Creating file: %s\n", file_path)
        dst, err := os.Create(file_path)
        if err != nil {
            fmt.Printf("ERROR: Failed to create file %s: %v\n", file_path, err)
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer dst.Close()

        // Copy the uploaded file to the destination
        fmt.Printf("DEBUG: Starting file copy for: %s\n", file_path)
        bytesWritten, err := io.Copy(dst, file)
        if err != nil {
            fmt.Printf("ERROR: Failed to copy file data: %v\n", err)
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        fmt.Printf("DEBUG: Successfully wrote %d bytes to file: %s\n", bytesWritten, file_path)
    } else if errors.Is(err, http.ErrMissingFile) {
        fmt.Printf("DEBUG: No file uploaded\n")
    } else {
        http.Error(w, "Error reading uploaded file", http.StatusBadRequest)
        return
    }

    user_id := 1

    // Create the item with proper FilePath handling
    var filePath sql.NullString
    if file_path != "" {
        file_path_serve := "/uploads/" + filepath.Base(handler.Filename)
        filePath = sql.NullString{String: file_path_serve, Valid: true}
    } else {
        filePath = sql.NullString{String: "", Valid: false}
    }

    item := Item{
        Name:      name,
        Content:   content,
        FilePath:  filePath,
        UserId:    int32(user_id),
    }

    conn, err := connect()
    if err != nil {
        fmt.Printf("Database connection error %s", err)
        http.Error(w, "Database connection error", http.StatusInternalServerError)
        return
    }
    defer conn.Close(context.Background())
    i := insertItem(conn, item)

    app.templates.ExecuteTemplate(w, "item.html", i)
}

Allow Saving the File Path to Your Database

I won’t go into the details here but basically what you’ll need to do is

  1. Update your Database Schema to add a string field that holds the filename (or an array of them if you want multiple)
  2. Update your ‘Item’ struct to also allow for this field. (Side note: you’ll probably want to use the type sql.NullString instead of string if you want to make the file path optional. Otherwise, Go will freak out if you try to instantiate that struct with a NULL instead of a string. You could probably also coerce the DB to return an empty string instead of NULL, but that seems messy to me.)

So your Item struct would look something like this:

type Item struct {
    ID          int32          `json:"id"`
    UserId      int32          `json:"user_id"`
    Name        string         `json:"name"`
    Content     string         `json:"content"`
    FilePath    sql.NullString `json:"file_path"`
    CreatedAt   time.Time      `json:"created_at"`
}

Front End Stuff

Upload Form Field

Now of course, on the front end, you’ll need to actually add the file upload field:

(Simple enough)

<input class="file-input" type="file" name="file" />

Create the Handler So You Can Serve the File Back to the UI

You can serve it from whatever path you want, just be sure to pull from where your Volume is:

http.Handle("/uploads/", http.StripPrefix("/uploads/", http.FileServer(http.Dir("/app/uploads"))))

Add the HTML that will render the file:

(Easy peasy.) I’m using HTMX here, but you can adapt to whatever front end framework you’re using (or raw JS if you so please):

<img src="{{.FilePath.String}}" />

The Volume

Ok last but certainly not least, we need to update our Docker Compose file so we create the Volume that we’ll mount to in order to write and read our files. Essentially, all we have to do is define the Volume in the top level ‘volumes’ section like this:

volumes:
  uploads:

Then specify the directory location that the Volume will mount to in our service:

services:
  main:

    volumes:
      - uploads:/app/uploads

volumes:
  uploads:

Now You’ve Got Uploads!!!

Like I mentioned earlier, there’s more to do to make it production ready, and you can even implement things like file streaming to make it more efficient for larger files, but that’s the basics of it. Now you can take ownership of your file uploads, so you still have functionality when half the internet is down due to some cloud provider’s mistake, and you save a little money on those annoying cloud fees too!
