I am playing around with different tools to prepare a dev environment. Docker is a nice option. I created the whole dev environment in Docker and can build a project in it. The source code for this project lives outside the Docker container (on the host). This way you can use your IDE to edit it and use Docker just to build it. However, there is one problem: a) Docker on OS X uses a VM (a VirtualBox VM); b) file sharing is reasonably slow (way slower than file I/O on the host); c) the project has something like a gazillion files (which exacerbates problems (a) and (b)).
If I move the source code into Docker, I will have the same problem in the IDE (it will have to access shared files and it will be slow). I heard about some workarounds to make it fast, but I can't seem to find any information on the subject.

Update 1: I used the Docker file-sharing feature, meaning I run:

    docker run -P -i -v <host_dir>:<container_dir> -t <image> /bin/bash

However, sharing between the VM and Docker isn't the problem. The bottleneck is sharing between the host and the VM.
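(For reference, the concrete form of that command; the paths and image name here are just placeholders:

    docker run -P -i -t -v ~/myproject:/src my-dev-image /bin/bash

This mounts ~/myproject from the host into the container at /src, so the build runs against the host's copy of the sources.)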
The workaround I use is not to use boot2docker but instead have a Vagrant VM. There is no such big penalty for mounting folders host → Vagrant → Docker. On the downside, I have to pre-map folders to Vagrant (basically my whole work directory) and pre-expose a range of ports from the Vagrant box to the host to have access to the Docker services directly from there.
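For illustration, a minimal Vagrantfile along those lines (the box name, paths, and port range are hypothetical):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"

      # Pre-map the whole work directory into the VM
      config.vm.synced_folder "~/work", "/home/vagrant/work"

      # Pre-expose a range of ports so Docker services running in the VM
      # are reachable directly from the host
      (8000..8010).each do |port|
        config.vm.network "forwarded_port", guest: port, host: port
      end
    end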
This is unfortunately a typical problem Windows and OS X users are currently struggling with, and one that cannot be solved trivially, especially in the case of Windows users. The main culprit is VirtualBox's vboxfs, which is used for file sharing and which, despite being incredibly useful, results in poor filesystem I/O. There are numerous situations in which developing the project sources inside the guest VM is brought to a crawl, the main two being scores of 3rd-party sources introduced by package managers and Git repositories with a sizable history.
The obvious approach is to move as much of the project-related files outside of vboxfs somewhere else into the guest. For instance, symlinking the package manager directory into the project's vboxfs tree, with something like:

    mkdir /var/cache/node_modules && ln -s /var/cache/node_modules /myproject/node_modules

This alone improved the startup time from 28 seconds down to 4 seconds for a Node.js application with a few dozen dependencies running on my SSD. Unfortunately, this is not applicable to managing Git repositories, short of splatting/truncating your history and committing to data loss, unless the Git repository itself is provisioned within the guest, which forces you to have two repositories: one to clone the environment for inflating the guest and another containing the actual sources, where consolidating the two worlds becomes an absolute pain. The best way to approach the situation is to either:

1. drop vboxfs in favor of a shared-transport mechanism that results in better I/O in the guest, such as NFS. Unfortunately, for Windows users, the only way to get NFS service support is to run the enterprise edition of Windows (which I believe will still be true for Windows 10).

2. revert to mounting raw disk partitions into the guest, noting the related risks of giving your hypervisor raw disk access.

If your developer audience is wholly composed of Linux and OS X users, option 1 might be viable. Create a Vagrant machine, configure NFS sharing between your host and guest, and profit. If you do have Windows users, then, short of buying them an enterprise license, it would be best to simply ask them to repartition their disks and work inside a guest VM. I personally use a Windows host and have a 64 GB partition on my SSD that I mount directly into my Arch Linux guest and operate from there. I also switched to GPT and UEFI and have an option to boot directly into Arch Linux in case I want to circumvent the overhead of the virtualized hardware, giving me the best of both worlds with little compromise.

A couple of pieces of info which I found. I started to use Vagrant to work with VirtualBox and install Docker in it.
It gives more flexibility. The default sharing in Vagrant VirtualBox provisioning is very slow. NFS sharing is much faster. However, it can still be reasonably slow (especially if your build process creates files which need to be written back to the share). Vagrant 1.5+ has an rsync option (it uses rsync to copy files from the host to the VM). It's faster because it doesn't have to write any changes back.
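For reference, the synced-folder type is a one-line choice in the Vagrantfile (a sketch; the paths are placeholders, and NFS additionally needs a private network with a static IP):

    # Default vboxfs sharing -- slow for large trees
    config.vm.synced_folder "./src", "/vagrant/src"

    # NFS sharing -- requires a host-only network
    config.vm.network "private_network", ip: "192.168.50.4"
    config.vm.synced_folder "./src", "/vagrant/src", type: "nfs"

    # rsync sharing -- one-way copy into the VM, nothing is written back
    config.vm.synced_folder "./src", "/vagrant/src", type: "rsync"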
This rsync option has auto-sync (to continuously sync changes). This rsync option consumes a lot of CPU, and people came up with a gem to overcome that. So, Vagrant + VirtualBox + rsync shared folder + auto rsync + vagrant gatling looks like a good option for my case (still researching it). I tried vagrant gatling. However, it results in non-deterministic behavior: I never know whether new files were copied into the VM or not. It wouldn't be a problem if it took 1 second. However, it may take 20 seconds, which is too much (a user can switch windows and start a build when the new files weren't synced yet).
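Assuming the gem in question is vagrant-gatling-rsync, the setup looks roughly like this (a sketch based on that plugin's documented options, not something verified here):

    vagrant plugin install vagrant-gatling-rsync

Then, in the Vagrantfile:

    # Coalesce filesystem events and run rsync at most once per quiet period
    config.gatling.latency = 2.5
    config.gatling.rsync_on_startup = false

and run vagrant gatling-rsync-auto instead of vagrant rsync-auto.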
Now, I am thinking about some way to copy over ONLY the files which changed. I am still in the research phase. The idea would be to use FSEvents to listen for file changes and send over only the changed files. It looks like there are some tools around which do that. Gatling internally uses FSEvents. The only problem is that it triggers a full rsync (which goes and starts comparing dates/times and sizes for 40k files).
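As a sketch of that idea (assuming fswatch is installed on the host; the paths and VM address are hypothetical, and deletions are not handled):

    # Watch the project tree and push only the files that actually changed,
    # instead of letting rsync re-scan all 40k files on every event.
    SRC=$HOME/myproject
    DST=vagrant@192.168.50.4:/home/vagrant/myproject
    fswatch -0 "$SRC" | while IFS= read -r -d '' file; do
      # the /./ marker plus --relative makes rsync recreate only the
      # path relative to $SRC on the VM side
      rsync -az --relative "$SRC/./${file#"$SRC"/}" "$DST"
    done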
$ docker run -p 8000:80 -d nginx

Now, connections to localhost:8000 are sent to port 80 in the container. The syntax for -p is HOST_PORT:CONTAINER_PORT.

Known limitations, use cases, and workarounds: the following is a summary of the current limitations of the Docker for Mac networking stack, along with some ideas for workarounds.
There is no docker0 bridge on macOS: because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.

I cannot ping my containers: Docker for Mac can't route traffic to containers.
Per-container IP addressing is not possible: the docker (Linux) bridge network is not reachable from the macOS host.

Use cases and workarounds: there are two scenarios that the above limitations affect.

I want to connect from a container to a service on the host: the host has a changing IP address (or none, if you have no network access). From 18.03 onwards, our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
This is for development purposes and will not work in a production environment outside of Docker for Mac. The gateway is also reachable as gateway.docker.internal.

I want to connect to a container from the Mac: port forwarding works for localhost; --publish, -p, and -P all work. Ports exposed from Linux are forwarded to the host. Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed. The command to run the nginx webserver shown above is an example of this.
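A quick sketch of both directions (the image and port choices here are only examples):

    # Mac -> container: publish a port, then hit localhost
    # (per the nginx example above)
    curl -I http://localhost:8000

    # container -> Mac: use the special DNS name instead of the host's IP
    # (requires Docker for Mac 18.03+)
    docker run --rm alpine ping -c 2 host.docker.internal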