Windows and Linux do not share a file system, and they do not share a permissions model either.
Linux inherited its permissions system from Unix. At the time AT&T designed it, disk space was extremely expensive, so the system was built to save every bit on disk. Each file carries only three sets of permissions: one for the owner, one for the owner's group, and one for everyone else. More complex permission rules require a different mechanism, POSIX ACLs, which is not enabled everywhere by default and usually depends on installing extra packages (the acl tools). A further problem is that ACLs are not implemented in the shared folders that VirtualBox provides.
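To make the contrast concrete, here is a minimal sketch (the paths and file name are hypothetical) of those three permission sets, using only chmod and stat:

```shell
#!/bin/bash
# Create a demo file (hypothetical path):
mkdir -p /tmp/perm_demo
touch /tmp/perm_demo/relatorio.txt

# rw- for the owner, r-- for the owner's group, --- for everyone else:
chmod 640 /tmp/perm_demo/relatorio.txt

# stat prints the three sets in octal (6 = rw-, 4 = r--, 0 = ---),
# plus the owner and group the sets apply to:
stat -c '%a %U %G' /tmp/perm_demo/relatorio.txt
```

That single octal triplet is all the granularity the classic model offers; anything finer than owner/group/other needs ACLs.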
On Windows, permissions have been taken to the extreme. Each file can carry specific permissions for any number of different users and groups, plus "Everyone". In addition, "Everyone" can mean different things: "All authenticated users", "All guests", "All local users", "All network users". To make matters worse, permissions can be inherited between folders and their child files, among other possibilities.
When a service such as VirtualBox shares Windows folders with Linux, it has to bridge this gap between the two models as best it can.
As we have seen, because of the nature of the two systems, modifications are not mirrored identically in the shared folder on Windows. When we access a share from Linux, either the Windows credentials end up being shared between Linux users, or we get an access error on the share when that user does not have permission.
Docker, on the other hand, takes advantage of Linux kernel features (namespaces, cgroups, SELinux and union filesystems such as UnionFS) to run complete systems (containers) for different tasks, with the goal of isolating them from one another. Inside each container there are users and groups that may not even exist on the host system!
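You can see these isolation primitives on any recent Linux host: every process carries one handle per namespace type, and Docker simply gives each container a fresh set. A quick sketch:

```shell
#!/bin/bash
# Every process exposes its namespace handles under /proc/<pid>/ns.
# Listing them for the current shell shows the isolation axes the
# kernel offers (mnt, pid, net, user, ...), which Docker uses to
# separate containers from the host:
ls /proc/self/ns
```

A container gets its own mnt, pid, net and (optionally) user namespaces, which is why UIDs inside it need not exist on the host at all.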
By running a container whose files live on a VirtualBox share, you are exposed to all of these issues, which will make it impossible to use the container fully.
If you really need to get the files over to Windows, I suggest a script that runs periodically (every 10 minutes, for example), compressing the container's files and writing the archive to the Windows share.
script.sh:
#!/bin/bash
tar -cpzf /compartilhamento/arquivo.tar.gz pasta_do_conteiner
where:
  -c  creates the archive
  -p  preserves permissions
  -z  compresses with gzip
  -f  sets the output file
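The round trip can be sketched as follows (paths here are hypothetical stand-ins for the share and the container folder): pack a folder while keeping its permissions, then restore it elsewhere with -x, where -p again preserves the original modes on extraction.

```shell
#!/bin/bash
# Create a demo folder with a restrictive file mode:
mkdir -p /tmp/pasta_demo
touch /tmp/pasta_demo/app.conf
chmod 600 /tmp/pasta_demo/app.conf

# Pack it, preserving permissions (-p); -C sets the working directory
# so the archive holds relative paths:
tar -cpzf /tmp/arquivo_demo.tar.gz -C /tmp pasta_demo

# Restore into another location, again with -p:
mkdir -p /tmp/restaurado
tar -xpzf /tmp/arquivo_demo.tar.gz -C /tmp/restaurado

# The restored file keeps its original mode:
stat -c '%a' /tmp/restaurado/pasta_demo/app.conf
```

Because the permissions travel inside the tarball rather than through the VirtualBox share, they survive even though the share itself cannot represent them.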
crontab -e:
*/10 * * * * /caminho/para/o/script.sh
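For reference, the five fields of that crontab entry read, left to right, minute, hour, day of month, month and day of week; */10 in the minute field means "every 10 minutes":

```shell
# min   hour  dom  mon  dow  command
*/10    *     *    *    *    /caminho/para/o/script.sh
```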