[Error] Errors during MDM installation
Description
I just tried to install CODECO following the "Deploy the entire CODECO framework inside a Docker container" guide
(here).
PDLC could not reach the MDM API (it could still reach it one day earlier, so the communication format, etc. is not the issue).
When running the installation scripts I get the following errors:
Logs from Installation
.....................Installing MDM.....................................
namespace/he-codeco-mdm created
"bitnami" has been added to your repositories
"neo4j" has been added to your repositories
NAME: mdm-zookeeper
LAST DEPLOYED: Wed Nov 20 22:52:04 2024
NAMESPACE: he-codeco-mdm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 13.6.0
APP VERSION: 3.9.3
** Please be patient while the chart is being deployed **
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
mdm-zookeeper.he-codeco-mdm.svc.cluster.local
To connect to your ZooKeeper server run the following commands:
export POD_NAME=$(kubectl get pods --namespace he-codeco-mdm -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=mdm-zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- zkCli.sh
To connect to your ZooKeeper server from outside the cluster execute the following commands:
kubectl port-forward --namespace he-codeco-mdm svc/mdm-zookeeper 2181:2181 &
zkCli.sh 127.0.0.1:2181
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- resources
- tls.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
NAME: mdm-kafka
LAST DEPLOYED: Wed Nov 20 22:52:06 2024
NAMESPACE: he-codeco-mdm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 21.1.1
APP VERSION: 3.4.0
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
mdm-kafka.he-codeco-mdm.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092
You need to configure your Kafka client to access using SASL authentication. To do so, you need to create the 'kafka_jaas.conf' and 'client.properties' configuration files with the content below:
- kafka_jaas.conf:
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="connector"
password="$(kubectl get secret mdm-kafka-jaas --namespace he-codeco-mdm -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
};
- client.properties:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE----- \
... \
-----END CERTIFICATE-----
To create a pod that you can use as a Kafka client run the following commands:
kubectl run mdm-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r6 --namespace he-codeco-mdm --command -- sleep infinity
kubectl cp --namespace he-codeco-mdm /path/to/client.properties mdm-kafka-client:/tmp/client.properties
kubectl cp --namespace he-codeco-mdm /path/to/kafka_jaas.conf mdm-kafka-client:/tmp/kafka_jaas.conf
kubectl exec --tty -i mdm-kafka-client --namespace he-codeco-mdm -- bash
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka_jaas.conf"
PRODUCER:
kafka-console-producer.sh \
--producer.config /tmp/client.properties \
--broker-list mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--consumer.config /tmp/client.properties \
--bootstrap-server mdm-kafka.he-codeco-mdm.svc.cluster.local:9092 \
--topic test \
--from-beginning
NAME: mdm-neo4j
LAST DEPLOYED: Wed Nov 20 22:53:42 2024
NAMESPACE: he-codeco-mdm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing neo4j-standalone.
Your release "mdm-neo4j" has been installed in namespace "he-codeco-mdm".
The neo4j user's password has been set to "xxxxxxx".To view the progress of the rollout try:
$ kubectl --namespace "he-codeco-mdm" rollout status --watch --timeout=600s statefulset/mdm-neo4j
Once rollout is complete you can log in to Neo4j at "neo4j://mdm-neo4j.he-codeco-mdm.svc.cluster.local:7687". Try:
$ kubectl run --rm -it --namespace "he-codeco-mdm" --image "neo4j:4.4.35" cypher-shell \
-- cypher-shell -a "neo4j://mdm-neo4j.he-codeco-mdm.svc.cluster.local:7687" -u neo4j -p "xxxxxxx"
Graphs are everywhere!
WARNING: Passwords set using 'neo4j.password' will be stored in plain text in the Helm release ConfigMap.
Please consider using 'neo4j.passwordFromSecret' for improved security.
Created topic json-events.
NAME: mdm-controller
LAST DEPLOYED: Wed Nov 20 22:53:43 2024
NAMESPACE: he-codeco-mdm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
# SPDX-License-Identifier: Apache-2.0
# Copyright IBM Corp 2023
MDM Controller
=======
This component loads the graph database with the events from the connectors stored in Kafka.
mdm.kafkaName: The name of the Kafka cluster Helm release (default mdm-kafka)
mdm.neo4jName: The name of the Neo4j Graph database Helm release (default mdm-neo4)
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: apiVersion not set
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: apiVersion not set
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: apiVersion not set
NAME: freshness-connector
LAST DEPLOYED: Wed Nov 20 22:53:44 2024
NAMESPACE: he-codeco-mdm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
# SPDX-License-Identifier: Apache-2.0
# Copyright IBM Corp 2023
MDM PROMETHEUS connector
........................................Finished installing MDM...............................................
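The three "INSTALLATION FAILED: ... apiVersion not set" messages above suggest that three of the MDM charts were never actually installed (the script output does not say which ones). A quick way to see which releases actually made it into the namespace, assuming helm and kubectl are available on the kind control-plane node, is something like:
helm list -n he-codeco-mdm
kubectl get svc,deploy,statefulset -n he-codeco-mdm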
The error from PDLC when trying to reach the MDM API was the following:
ConnectionError(MaxRetryError('HTTPConnectionPool(host=\'mdm-api.he-codeco-mdm.svc.cluster.local\', port=8090): Max retries exceeded with url: /pdlc?cluster=kind&namespace=he-codeco-acm&pod=acm-swm-app-backend (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7d6bc269aa80>: Failed to resolve \'mdm-api.he-codeco-mdm.svc.cluster.local\' ([Errno -2] Name or service not known)"))')), type(err)=<class 'requests.exceptions.ConnectionError'>
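The hostname that fails to resolve is the in-cluster DNS name of an mdm-api Service, so this looks consistent with that Service simply not existing (e.g. if its chart was one of the three failed installs above). A sketch of how to check it directly, assuming a Service named mdm-api is supposed to exist in he-codeco-mdm:
kubectl get svc mdm-api -n he-codeco-mdm
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup mdm-api.he-codeco-mdm.svc.cluster.local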
Below are the logs from the pods:
freshness-connector
root@kind-control-plane:/# kubectl logs freshness-connector-check-connection-6c8b6f5f55-knkr6 -n he-codeco-mdm
{'CAcerts': [], 'crawler': {'k8s': {'api_url': 'https://kubernetes.default.svc'}, 'prometheus': {'entity': 'Freshness_state', 'metric': 'CODECO_freshness', 'server': {'port': 9090, 'url': 'http://prometheus-k8s.monitoring.svc.cluster.local'}}, 'schedule': 300}, 'cronSchedule': '{{ cat (untilStep (mod (randNumeric 3 | atoi) (.Values.frequency | int) | int) 59 (.Values.frequency | int) | join ",") "* * * *" }}', 'frequency': 5, 'image': {'name': 'mdm-connector-prometheus', 'pullPolicy': 'Always', 'repository': 'hecodeco'}, 'k8sauth': {'enabled': False}, 'oidc': {'authServerUrl': 'https://local.oidc.server', 'clientId': 'pf-connector', 'clientSecret': 'xxxxxxxxxxxxxxx', 'enabled': False}, 'pathfinder': {'connector': {'id': 'prometheus-connector-kind-kind', 'state': {'access-key': 'XXX', 'bucket-name': 'pathfinder-test-cos-connector-status', 'secret-key': 'YYY', 'service-endpoint': 's3.us-south.cloud-object-storage.appdomain.cloud', 'signing-region': 'us-south', 'type': 'local'}}, 'kubernetesUrl': 'https://kubernetes.default.svc', 'url': 'http://mdm-api.he-codeco-mdm:8090/mdm/api/v1'}, 'podAnnotations': {}, 'podSecurityContext': {}, 'resources': {}, 'securityContext': {}, 'serviceAccount': {'annotations': {}, 'automount': True, 'create': True, 'name': ''}, 'stopMode': 'stop', 'suspended': False}
{'CAcerts': [], 'crawler': {'k8s': {'api_url': 'https://kubernetes.default.svc'}, 'prometheus': {'entity': 'Freshness_state', 'metric': 'CODECO_freshness', 'server': {'port': 9090, 'url': 'http://prometheus-k8s.monitoring.svc.cluster.local'}}, 'schedule': 300}, 'cronSchedule': '{{ cat (untilStep (mod (randNumeric 3 | atoi) (.Values.frequency | int) | int) 59 (.Values.frequency | int) | join ",") "* * * *" }}', 'frequency': 5, 'image': {'name': 'mdm-connector-prometheus', 'pullPolicy': 'Always', 'repository': 'hecodeco'}, 'k8sauth': {'enabled': False}, 'oidc': {'authServerUrl': 'https://local.oidc.server', 'clientId': 'pf-connector', 'clientSecret': 'xxxxxxxxxxxxxxx', 'enabled': False}, 'pathfinder': {'connector': {'id': 'prometheus-connector-kind-kind', 'state': {'access-key': 'XXX', 'bucket-name': 'pathfinder-test-cos-connector-status', 'secret-key': 'YYY', 'service-endpoint': 's3.us-south.cloud-object-storage.appdomain.cloud', 'signing-region': 'us-south', 'type': 'local'}}, 'kubernetesUrl': 'https://kubernetes.default.svc', 'url': 'http://mdm-api.he-codeco-mdm:8090/mdm/api/v1'}, 'podAnnotations': {}, 'podSecurityContext': {}, 'resources': {}, 'securityContext': {}, 'serviceAccount': {'annotations': {}, 'automount': True, 'create': True, 'name': ''}, 'stopMode': 'stop', 'suspended': False}
2024-11-20:22:55:02,951 INFO [eventpublisher.py:114] Load last connector state from json file
2024-11-20:22:55:02,951 INFO [eventpublisher.py:185] Pf-model-registry url: http://mdm-api.he-codeco-mdm:8090/mdm/api/v1
loop
2024-11-20:22:55:02,952 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:22:55:02,977 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
loop
2024-11-20:23:00:02,978 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:23:00:02,982 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
loop
2024-11-20:23:05:02,983 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:23:05:03,004 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
loop
2024-11-20:23:10:03,005 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:23:10:03,008 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
loop
2024-11-20:23:15:03,010 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:23:15:03,014 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
loop
2024-11-20:23:20:03,016 DEBUG [connectionpool.py:243] Starting new HTTP connection (1): prometheus-k8s.monitoring.svc.cluster.local:9090
2024-11-20:23:20:03,020 DEBUG [connectionpool.py:546] http://prometheus-k8s.monitoring.svc.cluster.local:9090 "GET /api/v1/query?query=CODECO_freshness%7Bkubernetes_pod_name%21%3D%27%27%7D HTTP/11" 200 93
mdm-controller
root@kind-control-plane:/# kubectl logs mdm-controller-0 -n he-codeco-mdm
2024-11-20:22:54:47,812 INFO [ctrl.py:25] Kafka direct mode
2024-11-20:22:54:47,812 INFO [ctrl.py:26] Bootstrap_servers: mdm-kafka-headless:9092
2024-11-20:22:54:47,812 INFO [ctrl.py:27] Kafka user: connector
2024-11-20:22:54:47,812 INFO [ctrl.py:28] Kafka topic json-events
2024-11-20:22:54:47,814 INFO [conn.py:380] <BrokerConnection node_id=bootstrap-0 host=mdm-kafka-headless:9092 <connecting> [IPv4 ('10.244.2.13', 9092)]>: connecting to mdm-kafka-headless:9092 [('10.244.2.13', 9092) IPv4]
2024-11-20:22:54:47,814 INFO [conn.py:1205] Probing node bootstrap-0 broker version
2024-11-20:22:54:47,878 INFO [conn.py:706] <BrokerConnection node_id=bootstrap-0 host=mdm-kafka-headless:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Authenticated as connector via SCRAM-SHA-512
2024-11-20:22:54:47,878 INFO [conn.py:445] <BrokerConnection node_id=bootstrap-0 host=mdm-kafka-headless:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Connection complete.
2024-11-20:22:54:47,984 INFO [conn.py:1267] Broker version identified as 2.5.0
2024-11-20:22:54:47,984 INFO [conn.py:1268] Set configuration api_version=(2, 5, 0) to skip auto check_version requests on startup
2024-11-20:22:54:47,985 WARNING [ctrl.py:46] json-events topic does not exists yet
{0}
2024-11-20:22:54:57,986 INFO [conn.py:380] <BrokerConnection node_id=0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <connecting> [IPv4 ('10.244.2.13', 9092)]>: connecting to mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 [('10.244.2.13', 9092) IPv4]
2024-11-20:22:54:58,191 INFO [conn.py:706] <BrokerConnection node_id=0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Authenticated as connector via SCRAM-SHA-512
2024-11-20:22:54:58,191 INFO [conn.py:445] <BrokerConnection node_id=0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Connection complete.
2024-11-20:22:54:58,191 INFO [conn.py:919] <BrokerConnection node_id=bootstrap-0 host=mdm-kafka-headless:9092 <connected> [IPv4 ('10.244.2.13', 9092)]>: Closing connection.
2024-11-20:22:55:02,615 INFO [cluster.py:371] Group coordinator for ctrl01 is BrokerMetadata(nodeId='coordinator-0', host='mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local', port=9092, rack=None)
2024-11-20:22:55:02,615 INFO [base.py:693] Discovered coordinator coordinator-0 for group ctrl01
2024-11-20:22:55:02,615 INFO [conn.py:380] <BrokerConnection node_id=coordinator-0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <connecting> [IPv4 ('10.244.2.13', 9092)]>: connecting to mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 [('10.244.2.13', 9092) IPv4]
2024-11-20:22:55:02,820 INFO [conn.py:706] <BrokerConnection node_id=coordinator-0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Authenticated as connector via SCRAM-SHA-512
2024-11-20:22:55:02,820 INFO [conn.py:445] <BrokerConnection node_id=coordinator-0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <authenticating> [IPv4 ('10.244.2.13', 9092)]>: Connection complete.
2024-11-20:23:04:02,923 INFO [client_async.py:952] Closing idle connection coordinator-0, last active 540000 ms ago
2024-11-20:23:04:02,924 INFO [conn.py:919] <BrokerConnection node_id=coordinator-0 host=mdm-kafka-0.mdm-kafka-headless.he-codeco-mdm.svc.cluster.local:9092 <connected> [IPv4 ('10.244.2.13', 9092)]>: Closing connection.
mdm-kafka (last lines only)
[2024-11-20 22:58:32,331] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 22:58:32,331] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 22:58:32,333] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 22:58:32,333] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2024-11-20 23:03:32,334] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 23:03:32,334] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 23:03:32,334] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 23:03:32,334] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2024-11-20 23:08:32,335] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 23:08:32,335] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 23:08:32,335] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 23:08:32,335] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2024-11-20 23:13:32,335] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 23:13:32,336] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 23:13:32,336] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 23:13:32,336] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2024-11-20 23:18:32,336] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 23:18:32,336] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 23:18:32,337] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 23:18:32,337] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2024-11-20 23:23:32,337] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2024-11-20 23:23:32,337] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2024-11-20 23:23:32,338] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2024-11-20 23:23:32,338] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
mdm-neo4j
root@kind-control-plane:/# kubectl logs mdm-neo4j-0 -n he-codeco-mdm
Changed password for user 'neo4j'. IMPORTANT: this change will only take effect if performed before the database is started for the first time.
Fetching versions.json for Plugin 'apoc' from https://neo4j-contrib.github.io/neo4j-apoc-procedures/versions.json
Installing Plugin 'apoc' from https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/4.4.0.28/apoc-4.4.0.28-all.jar to /var/lib/neo4j/plugins/apoc.jar
Applying default values for plugin apoc to neo4j.conf
SLF4J(W): No SLF4J providers were found.
SLF4J(W): Defaulting to no-operation (NOP) logger implementation
SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details.
SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier.
SLF4J(W): Ignoring binding found at [jar:file:/var/lib/neo4j/lib/slf4j-nop-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation.
mdm-zookeeper
root@kind-control-plane:/# kubectl logs mdm-zookeeper-0 -n he-codeco-mdm
zookeeper 22:53:07.89 INFO ==>
zookeeper 22:53:07.89 INFO ==> Welcome to the Bitnami zookeeper container
zookeeper 22:53:07.89 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
zookeeper 22:53:07.89 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
zookeeper 22:53:07.89 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
zookeeper 22:53:07.89 INFO ==>
zookeeper 22:53:07.90 INFO ==> ** Starting ZooKeeper setup **
zookeeper 22:53:07.91 WARN ==> You have set the environment variable ALLOW_ANONYMOUS_LOGIN=yes. For safety reasons, do not use this flag in a production environment.
zookeeper 22:53:07.97 INFO ==> Initializing ZooKeeper...
zookeeper 22:53:07.97 INFO ==> No injected configuration file found, creating default config files...
zookeeper 22:53:07.99 INFO ==> No additional servers were specified. ZooKeeper will run in standalone mode...
zookeeper 22:53:08.00 INFO ==> Deploying ZooKeeper from scratch...
zookeeper 22:53:08.00 INFO ==> ** ZooKeeper setup finished! **
zookeeper 22:53:08.01 INFO ==> ** Starting ZooKeeper **
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg