Spring Boot - File Upload, Download - via MinIO or AWS S3 SDK

Goal:

  1. Create a web application in Spring Boot that can upload, download, and delete files in file storage.
  2. Show that, using the AWS Java SDK for S3, you can easily write an implementation against MinIO, a free, S3-compatible distributed file storage.
  3. Also provide a MinioClient implementation, in case you do not need the AWS S3 client at all.

So, first of all we are going to create a Spring Boot project through Spring Initializr at https://start.spring.io/

We will use a minimal set of components - the main one is Spring Web, because we want to expose a REST API.

The major role in our sample is played by file operations: upload, download, listing the files in a bucket, and deleting a file.

Buckets theory
An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

Within AWS S3, I'll call a bucket the main directory under which we save our files - for instance, we could split buckets into videos, photos, and documents, and also define some metadata to store alongside the file types, but that is just one concrete solution and more of an architectural decision.
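As a side note, S3 imposes naming rules on buckets (roughly: 3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit). A minimal, non-exhaustive validity check might look like this - the BucketNames helper is hypothetical and not part of the project:

```java
// Minimal sketch of S3 bucket-name validation (not exhaustive):
// 3-63 chars, lowercase letters, digits, dots and hyphens,
// starting and ending with a letter or digit.
public class BucketNames {
    private static final java.util.regex.Pattern VALID =
            java.util.regex.Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

    public static boolean isValid(String name) {
        return name != null && VALID.matcher(name).matches();
    }
}
```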

To work with file storage via the MinIO client or the Amazon S3 Java SDK, we need to add some extra library dependencies to the pom.xml file:

        <!-- ... -->
        <!-- minio client -->
        <dependency>
            <groupId>io.minio</groupId>
            <artifactId>minio</artifactId>
            <version>7.0.2</version>
        </dependency>
        <!-- aws java sdk client -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>${aws.sdk.version}</version>
        </dependency>
        <!-- we will use IOUtils from this library, for simplicity -->
        <dependency>
            <groupId>commons-fileupload</groupId>
            <artifactId>commons-fileupload</artifactId>
            <version>1.3.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-io</artifactId>
            <version>1.3.2</version>
        </dependency>
        <!-- ... -->

So, to work with MinIO storage we need to set the following parameters - an ACCESS KEY and a SECRET KEY. It is recommended to keep these keys in environment variables if you are working with a local MinIO server installation; for our development purposes we set them in the application.properties configuration file:

s3.url=https://play.min.io
s3.accessKey=Q3AM3UQ867SPQQA43P2F
s3.secretKey=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG

spring.servlet.multipart.max-file-size=100MB
spring.servlet.multipart.max-request-size=100MB

As you can see, there are two more parameters; they are helpful to set in advance in case your files are bigger than the size established by default. Here we set 100MB as the maximum size of an uploaded file (by default, max-file-size is 1MB and max-request-size is 10MB).
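Spring parses values such as 100MB into bytes for you (via its DataSize type). Purely for illustration, here is a simplified sketch of that kind of parsing - the Sizes helper below is hypothetical and far less strict than the real thing:

```java
public class Sizes {
    // Parse strings like "10KB", "100MB", "5GB" into a byte count
    // (simplified illustration; Spring's DataSize handles this for real)
    public static long parse(String value) {
        String v = value.trim().toUpperCase();
        long factor = 1;
        if (v.endsWith("KB")) { factor = 1024L; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("MB")) { factor = 1024L * 1024; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("GB")) { factor = 1024L * 1024 * 1024; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("B")) { v = v.substring(0, v.length() - 1); }
        return Long.parseLong(v.trim()) * factor;
    }
}
```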

Aaand... a question: why do we need the parameter s3.url=https://play.min.io? It's a fair question. For the simplicity of our example, we are using the open and free MinIO storage that the MinIO team provides for testing their APIs. It's so convenient that we don't even need to install a storage server locally for a proof of concept - isn't that a beauty?! 😂

Well, what's next - we create our own configuration class, where we define the AmazonS3 and MinioClient clients as beans for working with the storage. In our example we implement both, but the controller is going to use AmazonS3 👇👇👇

package com.timurisachenko.microstorage.configs;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import io.minio.MinioClient;
import io.minio.errors.InvalidEndpointException;
import io.minio.errors.InvalidPortException;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    @Value("${s3.url}")
    private String s3Url;
    @Value("${s3.accessKey}")
    private String accessKey;
    @Value("${s3.secretKey}")
    private String secretKey;

    @Bean
    public MinioClient s3Client() throws InvalidPortException, InvalidEndpointException {
        return new MinioClient(s3Url, accessKey, secretKey);
    }

    @Bean
    public AmazonS3 s3() {
        AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setSignerOverride("AWSS3V4SignerType");

        AmazonS3 s3Client = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(s3Url, Regions.US_EAST_1.getName()))
                .withPathStyleAccessEnabled(true)
                .withClientConfiguration(clientConfiguration)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        return s3Client;
    }

}
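One detail worth highlighting in the configuration above is withPathStyleAccessEnabled(true): with path-style access the bucket name goes into the URL path rather than into the hostname, which is what MinIO expects. A small sketch of the difference - the S3Urls helper is hypothetical, just to show the two URL shapes:

```java
public class S3Urls {
    // Path-style: endpoint/bucket/key - the form MinIO expects
    public static String pathStyle(String endpoint, String bucket, String key) {
        return endpoint + "/" + bucket + "/" + key;
    }

    // Virtual-hosted style: bucket.host/key - the AWS S3 default
    public static String virtualHosted(String endpoint, String bucket, String key) {
        return endpoint.replaceFirst("://", "://" + bucket + ".") + "/" + key;
    }
}
```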

Great! We have completed the client configuration; now we can write our endpoints in the controller class StorageController.

package com.timurisachenko.microstorage.controllers;

import com.timurisachenko.microstorage.services.AmazonS3Service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@RestController
@RequestMapping("/s3")
public class StorageController {
    private AmazonS3Service s3Service;

    @Autowired
    public StorageController(@Qualifier("amazonS3ServiceImpl") AmazonS3Service s3Service) {
        this.s3Service = s3Service;
    }

    @PostMapping(value = "/{bucketName}/files", consumes = {MediaType.MULTIPART_FORM_DATA_VALUE})
    public Map<String, String> upload(@PathVariable("bucketName") String bucketName, @RequestPart(value = "file") MultipartFile files) throws Exception {
        s3Service.uploadFile(bucketName, files.getOriginalFilename(), files.getBytes());
        Map<String, String> result = new HashMap<>();
        result.put("key", files.getOriginalFilename());
        return result;
    }

    @GetMapping(value = "/{bucketName}/{keyName}", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
    public ResponseEntity<ByteArrayResource> downloadFile(@PathVariable("bucketName") String bucketName, @PathVariable("keyName") String keyName) throws Exception {
        byte[] data = s3Service.downloadFile(bucketName, keyName);
        ByteArrayResource resource = new ByteArrayResource(data);

        return ResponseEntity
                .ok()
                .contentLength(data.length)
                .header("Content-type", "application/octet-stream")
                .header("Content-disposition", "attachment; filename=\"" + keyName + "\"")
                .body(resource);
    }

    @DeleteMapping("/{bucketName}/files/{keyName}")
    public void delete(@PathVariable("bucketName") String bucketName, @PathVariable(value = "keyName") String keyName) throws Exception {
        s3Service.deleteFile(bucketName, keyName);
    }

    @GetMapping("/{bucketName}/files")
    public List<String> listObjects(@PathVariable("bucketName") String bucketName) throws Exception {
        return s3Service.listFiles(bucketName);
    }
}

This is how the controller looks. AmazonS3Service is an interface that defines the set of operations common to both clients. The key detail of the controller is that uploading a file to the server must be done in a POST request with the MediaType.MULTIPART_FORM_DATA_VALUE content type; the endpoint for uploading files is /s3/{bucketName}/files and the method argument type is MultipartFile.

When downloading a file, the response is returned in the "application/octet-stream" format - the object comes back as an array of bytes with a Content-Disposition header.
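For context, draining the object's stream into that byte array is exactly what IOUtils.toByteArray does in the service implementation further below. A plain-JDK sketch of the same idea, using a hypothetical Streams helper:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class Streams {
    // Equivalent of IOUtils.toByteArray: drain an InputStream into a byte[]
    public static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```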

Coming back to our controller, you may have noticed the AmazonS3Service interface - it defines the set of methods common to both clients configured in our AppConfig class.

package com.timurisachenko.microstorage.services;

import java.io.File;
import java.util.List;

public interface AmazonS3Service {
    void uploadFile(String bucketName, String originalFilename, byte[] bytes) throws Exception;

    byte[] downloadFile(String bucketName, String fileUrl) throws Exception;

    void deleteFile(String bucketName, String fileUrl) throws Exception;

    List<String> listFiles(String bucketName) throws Exception;

    File upload(String bucketName, String name, byte[] content) throws Exception;

    byte[] getFile(String bucketName, String key) throws Exception;
}

Next is the implementation, the AmazonS3ServiceImpl class:

package com.timurisachenko.microstorage.services.impl;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.timurisachenko.microstorage.services.AmazonS3Service;
import org.apache.commons.io.IOUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.io.*;
import java.util.LinkedList;
import java.util.List;

@Service
public class AmazonS3ServiceImpl implements AmazonS3Service {
    private final AmazonS3 s3;

    @Autowired
    public AmazonS3ServiceImpl(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public void uploadFile(String bucketName, String originalFilename, byte[] bytes) throws Exception {
        File file = upload(bucketName, originalFilename, bytes);
        s3.putObject(bucketName, originalFilename, file);

    }

    @Override
    public byte[] downloadFile(String bucketName, String fileUrl) throws Exception {
        return getFile(bucketName, fileUrl);
    }

    @Override
    public void deleteFile(String bucketName, String fileUrl) throws Exception {
        s3.deleteObject(bucketName, fileUrl);
    }

    @Override
    public List<String> listFiles(String bucketName) throws Exception {
        List<String> list = new LinkedList<>();
        s3.listObjects(bucketName).getObjectSummaries().forEach(itemResult -> {
            list.add(itemResult.getKey());
            System.out.println(itemResult.getKey());
        });
        return list;
    }

    @Override
    public File upload(String bucketName, String name, byte[] content) throws Exception {
        // Buffer the content in a temporary file before handing it to the S3 client
        File file = new File("/tmp/" + name);
        try (FileOutputStream fos = new FileOutputStream(file)) {
            fos.write(content);
        }
        return file;
    }

    @Override
    public byte[] getFile(String bucketName, String key) throws Exception {
        // try-with-resources closes the object and its stream even if reading fails
        try (S3Object obj = s3.getObject(bucketName, key);
             S3ObjectInputStream stream = obj.getObjectContent()) {
            return IOUtils.toByteArray(stream);
        }
    }
}
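A note on the upload helper above: it writes to a fixed /tmp path, so concurrent uploads with the same file name would collide. Here is a sketch using java.nio that avoids collisions - the TempFiles helper is hypothetical, and the caller would be responsible for deleting the file afterwards:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFiles {
    // Write content to a uniquely named temp file; caller handles cleanup.
    public static Path writeTemp(String name, byte[] content) throws IOException {
        Path file = Files.createTempFile("upload-", "-" + name);
        Files.write(file, content);
        return file;
    }
}
```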

That's all! Thanks for your attention!
If you find this material useful, please leave a comment or send me a message on Telegram at @timur_isachenko. If you have any ideas on how to improve these materials, or which cases would be useful to break down, I look forward to hearing from you!
Thanks for reading all the way to this line!
🙏
Link to github sources 👈

Previous post - a short introduction to MinIO:
https://timurisachenko.com/minio/