Linux Docker For Mac Localhost Equivalent


Note this is different from "How to expose a service running inside a docker container, bound to localhost", which can be addressed in multiple ways on Docker for Linux, say through --net=host or even -v bind-mounts. My problem is specific to Docker for Mac, so it's not as straightforward. That's why Docker for Mac has a special hostname, docker.for.mac.localhost, with which you can access your host Mac from a container. The problem: I would like to write a universal shell script which will first try to resolve docker.for.mac.localhost and add the resolved address to /etc/hosts.


ThingsBoard cloud


We recommend using ThingsBoard Cloud - a fully managed, scalable and fault-tolerant platform for your IoT applications.

ThingsBoard Cloud is for everyone who would like to use ThingsBoard but doesn't want to host their own instance of the platform.


This guide will help you to install and start ThingsBoard using Docker on Linux or Mac OS.

Prerequisites

Running

Depending on the database used, there are three types of ThingsBoard single-instance docker images:

  • thingsboard/tb-postgres - single instance of ThingsBoard with PostgreSQL database.

    Recommended option for small servers with at least 1GB of RAM and minimal load (a few messages per second). 2-4GB is recommended.

  • thingsboard/tb-cassandra - single instance of ThingsBoard with Cassandra database.

    The most performant and recommended option but requires at least 4GB of RAM. 8GB is recommended.

  • thingsboard/tb - single instance of ThingsBoard with embedded HSQLDB database.

    Note: Not recommended for any evaluation or production usage and is used only for development purposes and automatic tests.

In this guide the thingsboard/tb-postgres image will be used. You can choose any other image with a different database (see above).

Choose ThingsBoard queue service

ThingsBoard is able to use various messaging systems/brokers for storing the messages and communication between ThingsBoard services. How to choose the right queue implementation?

  • In Memory queue implementation is built-in and the default. It is useful for development (PoC) environments and is not suitable for production deployments or any sort of cluster deployments.

  • Kafka is recommended for production deployments. This queue is used in most ThingsBoard production environments today. It is useful for both on-prem and private cloud deployments. It is also useful if you'd like to stay independent from your cloud provider. However, some providers also offer managed services for Kafka. See AWS MSK for example.

  • RabbitMQ is recommended if you don’t have much load and you already have experience with this messaging system.

  • AWS SQS is a fully managed message queuing service from AWS. Useful if you plan to deploy ThingsBoard on AWS.

  • Google Pub/Sub is a fully managed message queuing service from Google. Useful if you plan to deploy ThingsBoard on Google Cloud.

  • Azure Service Bus is a fully managed message queuing service from Azure. Useful if you plan to deploy ThingsBoard on Azure.

  • Confluent Cloud is a fully managed streaming platform based on Kafka. Useful for cloud-agnostic deployments.

See the corresponding architecture and rule engine pages for more details.

ThingsBoard includes the In Memory queue service and uses it by default without extra settings.

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file:
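A minimal docker-compose.yml sketch for this setup; the ports, volumes, container name, and restart policy match the "Where:" list further down, and since the In Memory queue is the default, no queue settings are required (verify details against the docs for your ThingsBoard version):

```yaml
version: '3.0'
services:
  mytb:
    restart: always
    image: "thingsboard/tb-postgres"
    ports:
      - "8080:9090"        # HTTP
      - "1883:1883"        # MQTT
      - "5683:5683/udp"    # CoAP (UDP)
    environment:
      TB_QUEUE_TYPE: in-memory
    volumes:
      - ~/.mytb-data:/data
      - ~/.mytb-logs:/var/log/thingsboard
```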

Apache Kafka is an open-source stream-processing software platform.

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file.
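A sketch of the queue settings to add under the mytb service's `environment:` section — TB_QUEUE_TYPE and TB_KAFKA_SERVERS follow ThingsBoard's documented naming, and a Kafka broker must be reachable at the given address:

```yaml
    environment:
      TB_QUEUE_TYPE: kafka
      TB_KAFKA_SERVERS: kafka:9092  # host:port of your Kafka broker
```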

AWS SQS Configuration

To access AWS SQS service, you first need to create an AWS account.

To work with the AWS SQS service you will need to create the following credentials using this instruction:

  • Access key ID
  • Secret access key

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file. Don’t forget to replace “YOUR_KEY”, “YOUR_SECRET” with your real AWS SQS IAM user credentials and “YOUR_REGION” with your real AWS SQS account region:
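A sketch of those lines, under the mytb service's `environment:` section — the variable names follow the TB_QUEUE_* pattern from ThingsBoard's queue documentation and should be verified against your version:

```yaml
    environment:
      TB_QUEUE_TYPE: aws-sqs
      TB_QUEUE_AWS_SQS_ACCESS_KEY_ID: YOUR_KEY
      TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY: YOUR_SECRET
      TB_QUEUE_AWS_SQS_REGION: YOUR_REGION
```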

Google Pub/Sub Configuration

To access the Pub/Sub service, you first need to create a Google Cloud account.

To work with Pub/Sub service you will need to create a project using this instruction.

Create service account credentials with the role "Editor" or "Admin" using this instruction, and save the JSON file with your service account credentials (step 9 there).

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file. Don't forget to replace "YOUR_PROJECT_ID" and "YOUR_SERVICE_ACCOUNT" with your real Pub/Sub project id and service account (the entire contents of the JSON file):
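A sketch of those lines, under the mytb service's `environment:` section — variable names follow ThingsBoard's TB_QUEUE_* pattern; verify against your version's docs:

```yaml
    environment:
      TB_QUEUE_TYPE: pubsub
      TB_QUEUE_PUBSUB_PROJECT_ID: YOUR_PROJECT_ID
      TB_QUEUE_PUBSUB_SERVICE_ACCOUNT: YOUR_SERVICE_ACCOUNT
```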

Azure Service Bus Configuration

To access Azure Service Bus, you first need to create an Azure account.

To work with Service Bus service you will need to create a Service Bus Namespace using this instruction.

Create Shared Access Signature using this instruction.

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file. Don't forget to replace "YOUR_NAMESPACE_NAME" with your real Service Bus namespace name, and "YOUR_SAS_KEY_NAME", "YOUR_SAS_KEY" with your real Service Bus credentials. Note: "YOUR_SAS_KEY_NAME" is the "SAS Policy" name, and "YOUR_SAS_KEY" is the "SAS Policy Primary Key":
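A sketch of those lines, under the mytb service's `environment:` section — variable names follow ThingsBoard's TB_QUEUE_* pattern; verify against your version's docs:

```yaml
    environment:
      TB_QUEUE_TYPE: service-bus
      TB_QUEUE_SERVICE_BUS_NAMESPACE_NAME: YOUR_NAMESPACE_NAME
      TB_QUEUE_SERVICE_BUS_SAS_KEY_NAME: YOUR_SAS_KEY_NAME
      TB_QUEUE_SERVICE_BUS_SAS_KEY: YOUR_SAS_KEY
```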

To install RabbitMQ, use this instruction.

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file. Don’t forget to replace “YOUR_USERNAME” and “YOUR_PASSWORD” with your real user credentials, “localhost” and “5672” with your real RabbitMQ host and port:
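A sketch of those lines, under the mytb service's `environment:` section — variable names follow ThingsBoard's TB_QUEUE_* pattern; verify against your version's docs:

```yaml
    environment:
      TB_QUEUE_TYPE: rabbitmq
      TB_QUEUE_RABBIT_MQ_HOST: localhost   # your RabbitMQ host
      TB_QUEUE_RABBIT_MQ_PORT: "5672"      # your RabbitMQ port
      TB_QUEUE_RABBIT_MQ_USERNAME: YOUR_USERNAME
      TB_QUEUE_RABBIT_MQ_PASSWORD: YOUR_PASSWORD
```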

Confluent Cloud Configuration

To access Confluent Cloud you should first create an account, then create a Kafka cluster and get your API Key.

Create docker compose file for ThingsBoard queue service:

Add the following lines to the yml file. Don't forget to replace "CLUSTER_API_KEY", "CLUSTER_API_SECRET" and "localhost:9092" with your real Confluent Cloud credentials and bootstrap servers:
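A sketch of the Kafka-based settings for Confluent Cloud, under the mytb service's `environment:` section — Confluent reuses the Kafka queue type; the exact variables for passing the API key and secret are version-specific, so check ThingsBoard's queue configuration docs:

```yaml
    environment:
      TB_QUEUE_TYPE: kafka
      TB_KAFKA_SERVERS: localhost:9092  # replace with your Confluent Cloud bootstrap servers
      # CLUSTER_API_KEY / CLUSTER_API_SECRET are supplied through ThingsBoard's
      # Confluent-specific TB_QUEUE_KAFKA_* variables; see the queue configuration docs.
```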

Where:

  • 8080:9090 - connect local port 8080 to exposed internal HTTP port 9090
  • 1883:1883 - connect local port 1883 to exposed internal MQTT port 1883
  • 5683:5683 - connect local port 5683 to exposed internal CoAP port 5683
  • ~/.mytb-data:/data - mounts the host's directory ~/.mytb-data to the ThingsBoard database data directory
  • ~/.mytb-logs:/var/log/thingsboard - mounts the host's directory ~/.mytb-logs to the ThingsBoard logs directory
  • mytb - friendly local name of this machine
  • restart: always - automatically start ThingsBoard on system reboot and restart it in case of failure
  • image: thingsboard/tb-postgres - docker image; can also be thingsboard/tb-cassandra or thingsboard/tb

Before starting the Docker container, run the following commands to create directories for storing data and logs, and then change their owner to the docker container user. The chown command requires sudo permissions (it will request a password for sudo access):
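A sketch of those commands — 799:799 is assumed here to be the uid:gid of the thingsboard user inside the official images; verify it for your image version:

```shell
mkdir -p ~/.mytb-data && sudo chown -R 799:799 ~/.mytb-data
mkdir -p ~/.mytb-logs && sudo chown -R 799:799 ~/.mytb-logs
```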

NOTE: Replace directory ~/.mytb-data and ~/.mytb-logs with directories you’re planning to use in docker-compose.yml.

Set the terminal to the directory which contains the docker-compose.yml file and execute the following command to bring the compose project up:
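The command, assuming the Compose v2 plugin (with standalone docker-compose v1, use `docker-compose up -d` instead):

```shell
docker compose up -d
```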

After executing this command you can open http://{your-host-ip}:8080 in your browser (for example, http://localhost:8080). You should see the ThingsBoard login page. Use the following default credentials:

  • System Administrator: [email protected] / sysadmin
  • Tenant Administrator: [email protected] / tenant
  • Customer User: [email protected] / customer

You can always change the password for each account on the account profile page.

Detaching, stop and start commands

You can detach from the session terminal with Ctrl-p Ctrl-q - the container will keep running in the background.

In case of any issues you can examine the service logs for errors. For example, to see the ThingsBoard node logs, execute the following command:
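The command, assuming the service name mytb from the compose file (`-f` follows the log output):

```shell
docker compose logs -f mytb
```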

To stop the container:

To start the container:

Upgrading

In order to update to the latest image, execute the following commands:
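A plausible sequence, assuming the tb-postgres image and the upgrade-tb.sh script shipped in the official images — verify the exact steps against the upgrade notes for your ThingsBoard version:

```shell
docker compose stop mytb
docker pull thingsboard/tb-postgres
# run the database upgrade script against the existing data volume
docker run -it -v ~/.mytb-data:/data --rm thingsboard/tb-postgres upgrade-tb.sh
docker compose rm mytb
docker compose up -d
```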

NOTE: if you use a different database, change the image name in all commands from thingsboard/tb-postgres to thingsboard/tb-cassandra or thingsboard/tb correspondingly.

NOTE: replace host’s directory ~/.mytb-data with directory used during container creation.

NOTE: if you have used one database and want to try another one, remove the current docker container using the docker-compose rm command and use a different directory for ~/.mytb-data in docker-compose.yml.


Troubleshooting

DNS issues

Note: if you observe errors related to DNS issues, for example:


You may configure your system to use Google public DNS servers. See corresponding Linux and Mac OS instructions.


Next steps

  • Getting started guides - These guides provide a quick overview of main ThingsBoard features. Designed to be completed in 15-30 minutes.

  • Connect your device - Learn how to connect devices based on your connectivity technology or solution.

  • Data visualization - These guides contain instructions on how to configure complex ThingsBoard dashboards.

  • Data processing & actions - Learn how to use ThingsBoard Rule Engine.

  • IoT Data analytics - Learn how to use rule engine to perform basic analytics tasks.

  • Hardware samples - Learn how to connect various hardware platforms to ThingsBoard.

  • Advanced features - Learn about advanced ThingsBoard features.

  • Contribution and Development - Learn about contribution and development in ThingsBoard.


I have Docker container A running a server, and container B running a client. At least for testing, both containers run on the same machine (the host). The client software needs to reach out of its own container and then into the server container. The various inter-container connection mechanisms are not usable, because I don't want to assume that the server is running in a container, although this is the case in the situation described above.

In fact, the "into the server container" part is not a problem, as that is easily taken care of by port mapping, i.e. the option -p of docker run. In other words, if the server is running natively on the host machine, the issue is about the same. In still other words, the problem is in connecting to the host machine from within a Docker container.

Unfortunately, the client software in B can not use localhost or 127.0.0.1; that will loop back into the container itself.

After some research, I figured out one solution. It turned out pretty simple. First, give the host machine's loopback interface an alias IP address (different from 127.0.0.1). The client software in container B can reach the host machine by connecting to this alias IP address directly. Since this IP may be hard to remember, docker run has an option for giving it an alias.


Step 1

If the host OS is Mac, do this:

If Linux, do this:
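Both variants, assuming the classic ifconfig loopback-alias approach the post describes (the loopback interface is lo0 on Mac and lo on Linux, with lo:0 as the alias interface):

```shell
# Mac: add an alias address to the loopback interface lo0
sudo ifconfig lo0 alias 10.254.254.254

# Linux: bring up a loopback alias interface lo:0
sudo ifconfig lo:0 10.254.254.254 up
```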

After this, you can check the effect with ifconfig: the loopback section now shows the additional address 10.254.254.254, whereas before setting the alias it did not.

(To remove the alias, do sudo ifconfig lo:0 down.)

I don't know what's special about 10.254.254.254; it comes from this post. Other addresses are also possible.

You will want these settings to survive a system reboot, i.e. to be applied at system startup. To do that, put the following block (with blank lines before and after) in the file /etc/network/interfaces:
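A sketch of that stanza, assuming Debian-style /etc/network/interfaces networking; the /32 netmask is an assumption that confines the alias to a single address:

```text
auto lo:0
iface lo:0 inet static
    address 10.254.254.254
    netmask 255.255.255.255
```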

Step 2

Use these options in the docker run command that launches container B:
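A sketch of the relevant flags — the image name client-image is hypothetical, and the rest of your usual options stay unchanged:

```shell
docker run --add-host=local_host:10.254.254.254 \
           --add-host=local:10.254.254.254 \
           -it client-image
```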

Then, within container B, the host machine can be reached by connecting to local_host, local, or 10.254.254.254 directly.

I also tried --add-host=localhost:10.254.254.254, and using localhost from within container B worked well. But there might be caveats, as localhost is, I guess, used by various programs in the system.

Update

I have been using a much simpler alternative for quite a while.

I don't remember the theory, but what you need to do is type this command within the Docker container (such as B):

I'm running a Linux Mint 19.3 host machine. This command within the Docker container gave me 172.17.0.1.

Alternatively, the following should print the same result:

If the command ip does not exist in your container, install the Linux package iproute2.
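Both forms, assuming iproute2's ip command; the awk filters just pick the gateway field out of the route listing:

```shell
# show only the default route; the gateway is the 3rd field
ip route show default | awk '{print $3}'

# alternative: filter the full routing table for the default entry
ip route | awk '/^default/ {print $3}'
```

Inside a container on the default bridge network, both print the host-side bridge address, typically 172.17.0.1.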

Now in container B, use this IP 172.17.0.1 to reach the host machine and container A. If the service is running on the host machine directly (no Docker involved), the usage is the same.

If the service is running on a remote machine (in container or not), then get the IP address of the remote machine and use it the same way.


This method does not need anything to be done to the host machine.


For convenience in Python programs, I also created this function:


(Originally written July 12, 2017; added the simpler alternative in May 2020.)
