05/10/2022, 8:36 AM
Hey channel, maybe one of you can help me with this one:

The Setup: My cluster is a 2-node bare-metal cluster managed by Rancher Kubernetes Engine, with a 3rd etcd node in AWS and an arbiter node (for quorum decisions in MongoDB and OpenEBS) also in AWS. The 2 real worker nodes are servers in different data centers with excellent (I guess) latency. I tied everything together with Netmaker (0.12.2) through some Ansible playbooks. Networking works perfectly fine, and I set the Netmaker node (on AWS) as an Ingress Gateway so our devs can connect to the cluster as External Clients. This works really well.

The Problem: Since I'm now setting up a firewall on my nodes, I only want the Netmaker network ( to have access to everything at any time. I did this with a stupid ufw command (
ufw allow in on nm-rke from
). Everything else is closed except a few SSH exceptions and open HTTP(S). Now I'm facing a situation where I need to route TCP traffic into my k8s cluster so that our devs can access our Postgres databases. But here `NodePort`s are tearing holes into the firewall, because k8s opens the 30000-ish ports directly in iptables, which (as far as I can tell) cannot be reliably prevented or undone. Since I'm a super lazy guy, I don't want any DB port (or any port other than 443) to be open to the Internet (security), and I also don't want my devs to mess around with k8s port forwarding (convenience). So I thought I'd set up
MetalLB to overcome this issue, but here I somehow got stuck. I successfully set up MetalLB and introduced a simple L2 config:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
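For context, the kind of Service I'm exposing through MetalLB looks roughly like this (the name, namespace, and selector are just placeholders for our actual Postgres setup):

```yaml
# Sketch of a LoadBalancer Service for Postgres; MetalLB's L2 speaker
# assigns it an IP from the configured address pool.
apiVersion: v1
kind: Service
metadata:
  name: postgres-lb        # placeholder name
  namespace: default       # placeholder namespace
spec:
  type: LoadBalancer
  selector:
    app: postgres          # placeholder selector for the DB pods
  ports:
  - name: postgres
    port: 5432             # standard Postgres port
    targetPort: 5432
```

The idea being that devs connect to the assigned LB IP on 5432 over the Netmaker network, with nothing exposed to the Internet.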
tl;dr: I can access the LBs from all nodes, but not from the external clients; they run into a timeout when trying to connect. Would appreciate any help (message limit reached)!