
Idempotency for istio-iptables apply flow #50328

Open · wants to merge 37 commits into master

Conversation

@leosarra (Contributor) commented Apr 9, 2024

Please provide a description of this PR:
Fixes #30393 and #42792 .

On master, a re-execution of the istio-init container is likely to fail because ISTIO_* chains (and/or rules) may already exist.
If, for whatever reason, the init containers get re-executed, we should make sure that they do not fail because the settings are already in effect. This is also one of the recommendations of the Kubernetes documentation about init containers:

Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on EmptyDirs should be prepared for the possibility that an output file already exists.
-- https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

With these changes, pilot-agent istio-iptables will remove rules that would otherwise end up duplicated, as well as existing Istio chains.

To help us figure out who should review this PR, please put an X in all the areas that this PR affects.

  • Ambient
  • Configuration Infrastructure
  • Docs
  • Dual Stack
  • Installation
  • Networking
  • Performance and Scalability
  • Policies and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure

Please check any characteristics that apply to this pull request.

  • Does not have any user-facing changes. This may include CLI changes, API changes, behavior changes, performance improvements, etc.

@istio-testing istio-testing added the do-not-merge/work-in-progress Block merging of a PR because it isn't ready yet. label Apr 9, 2024
@istio-testing (Collaborator):

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@istio-testing istio-testing added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 9, 2024
@@ -136,6 +136,8 @@ func bindCmdlineFlags(cfg *config.Config, cmd *cobra.Command) {
&cfg.NetworkNamespace)

flag.BindEnv(fs, constants.CNIMode, "", "Whether to run as CNI plugin.", &cfg.CNIMode)

flag.BindEnv(fs, constants.PreemptiveCleanup, "", "Whether to run a preemptive cleanup to prevent duplicated rules and chains", &cfg.PreemptiveCleanup)
Contributor:
Asking, because I haven't formed an opinion yet - does this need to be optional? Why would you not want to always just do this as a pre-step?

Contributor (Author):
I was not sure whether it should be performed all the time. I put it as opt-in for now in order not to impact the VM use case (a use case I don't know much about).
I'm investigating whether there could be an impact there from doing a preemptive cleanup, which deletes the ISTIO_* chains and every rule in them.

Anyway, I'm still doing some manual tests and I need to add proper coverage for this cleanup.

Contributor:
I would prefer not to make this a flag, and simply do it every time, which should be fine if it is idempotent.

If we did have to make it an option, the option should be an opt-out, not an opt-in.

Contributor (Author):
The option is now enabled by default and cannot be disabled. I don't think the cleanup will affect existing use cases, and it is in any case executed in "quiet mode" with errors being ignored.
Later on, probably not in this commit, I plan to add a --cleanup-only option which could come in handy for VM workloads.

@istio-testing istio-testing added the needs-rebase Indicates a PR needs to be rebased before being merged label Apr 13, 2024
@istio-testing istio-testing removed the needs-rebase Indicates a PR needs to be rebased before being merged label Apr 15, 2024
releasenotes/notes/50328.yaml (outdated; resolved)
@@ -136,6 +136,8 @@ func bindCmdlineFlags(cfg *config.Config, cmd *cobra.Command) {
&cfg.NetworkNamespace)

flag.BindEnv(fs, constants.CNIMode, "", "Whether to run as CNI plugin.", &cfg.CNIMode)

flag.BindEnv(fs, constants.PreemptiveCleanup, "", "Whether to run a preemptive cleanup to prevent duplicated rules and chains", &cfg.PreemptiveCleanup)
Contributor:
I would prefer not to make this a flag, and simply do it every time, which should be fine if it is idempotent.

If we did have to make it an option, the option should be an opt-out, not an opt-in.

tools/istio-iptables/pkg/capture/run_test.go (outdated; resolved)
@istio-testing istio-testing added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 18, 2024
@leosarra (Contributor, Author) commented Apr 18, 2024

I think after the last fix the logic should be fine now.
I'm working on the integration tests to simulate an istio-init rerun.

@bleggett (Contributor):

Looking good, TY!

@howardjohn (Member) left a comment:

doesn't delete+apply cause downtime if we are to actually apply this somewhere with rules already? Thinking of VMs, etc?

Or in the pod case as well actually. This seems dangerous

@bleggett (Contributor) commented Apr 23, 2024

doesn't delete+apply cause downtime if we are to actually apply this somewhere with rules already? Thinking of VMs, etc?

Or in the pod case as well actually. This seems dangerous

Well, in the case he's trying to fix, traffic disruption is already unavoidable, and it shouldn't be unsafe:

Current behavior on master will make a re-execution of istio-init container likely to fail due to the possibility of having existing ISTIO_* chains (and/or rules).

Any mutation of iptables rules should be construed as causing downtime; I don't think there's a way to do any iptables CRUD with no-downtime guarantees. Whether we fail during cleanup or fail during create, a failed iptables state means we don't know what the heck is happening to traffic, basically, and is always dangerous.

If we fail in either of those spots before user workloads are started, we can at least fail-closed.

istio-iptables in general needs some kind of cleanup logic as a shared component - whether it's safe to do cleanup in all situations is probably a case-by-case thing, for sure - for instance, we already have different IptablesConfigurator settings between ambient and sidecar, so if we are concerned we can just make preemptiveCleanup a flag there (do it for init, possibly not for repair or ambient yet)

@howardjohn (Member):

Well, in the case he's trying to fix, traffic disruption already is unavoidable and it shouldn't be unsafe:

In this case "Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on EmptyDirs should be prepared for the possibility that an output file already exists."

traffic is 100% fine. Kubernetes/the container runtime is being silly and re-running the init container for no reason, but the application containers are perfectly fine. TBH I don't know how it is acceptable for k8s to do this, but 🤷

@bleggett (Contributor):

In this case "Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on EmptyDirs should be prepared for the possibility that an output file already exists."

traffic is 100% fine. Kubernetes/the container runtime is being silly and re-running the init container for no reason, but the application containers are perfectly fine. TBH I don't know how it is acceptable for k8s to do this, but 🤷

Maybe we should just stop running iptables in init containers then. I don't think it is possible to run iptables and maintain both idempotency and traffic disruption guarantees.

@leosarra (Contributor, Author) commented Apr 24, 2024

Oooh, I assumed that init containers would not be re-executed while the application is running, but I was wrong; it does indeed look like that can happen...

Yes, the iptables operations are not atomic, so the delete will lead to a short downtime in iptables coverage.
On the other hand, as Ben mentioned, rerunning the iptables command on master does not work at all due to failures in case of pre-existing chains/rules. That leads to permanent traffic loss unless the pod is recreated.

I think we can find a middle ground to reduce the impact on existing rules in case of reruns.
Instead of doing a cleanup, I could add a step that checks for existing chains/rules via iptables -L (chains) and iptables -C (rules); if an existing chain/rule is found, the corresponding iptables -N/-A will be skipped.
This should prevent downtime of the istio-iptables rules in case they are re-applied while the application is running.

Keep in mind that checking chain/rule existence is not race-condition safe, so running multiple istio-iptables commands in parallel can lead to the usual shenanigans. In case of a race condition:

  • best case: chains are assumed not to exist while in reality they do -> error raised
  • worst case: chains are assumed not to exist while in reality they do -> rules are created twice

I assume that is fine since we don't expect parallel executions of istio-iptables.
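The skip-existing idea above can be sketched roughly as follows. This is a hypothetical, self-contained Go sketch: `fakeState`, `applyIdempotent`, and the in-memory chain map stand in for shelling out to `iptables -L`/`-C`/`-N`/`-A`; none of these names come from the PR itself.

```go
package main

import "fmt"

// fakeState is a hypothetical in-memory stand-in for the live netfilter state;
// in the real flow, chainExists/ruleExists would shell out to
// "iptables -L <chain>" and "iptables -C <chain> <rule>" respectively.
type fakeState struct {
	chains map[string][]string // chain name -> rules currently present
}

func (s *fakeState) chainExists(chain string) bool {
	_, ok := s.chains[chain]
	return ok
}

func (s *fakeState) ruleExists(chain, rule string) bool {
	for _, r := range s.chains[chain] {
		if r == rule {
			return true
		}
	}
	return false
}

// applyIdempotent skips "iptables -N" for chains that already exist and
// "iptables -A" for rules that are already present, instead of failing on rerun.
func applyIdempotent(s *fakeState, chain string, rules []string) (created, skipped int) {
	if !s.chainExists(chain) { // would be: iptables -N <chain>
		s.chains[chain] = nil
		created++
	} else {
		skipped++
	}
	for _, rule := range rules {
		if s.ruleExists(chain, rule) { // would be: iptables -C succeeded
			skipped++
			continue
		}
		s.chains[chain] = append(s.chains[chain], rule) // would be: iptables -A
		created++
	}
	return created, skipped
}

func main() {
	// Simulate a rerun: the chain and one of the two rules already exist.
	s := &fakeState{chains: map[string][]string{
		"ISTIO_REDIRECT": {"-p tcp -j REDIRECT --to-ports 15001"},
	}}
	created, skipped := applyIdempotent(s, "ISTIO_REDIRECT", []string{
		"-p tcp -j REDIRECT --to-ports 15001",
		"-p tcp --dport 22 -j RETURN",
	})
	fmt.Println(created, skipped) // prints: 1 2
}
```

As the comment notes, the check and the subsequent create are two separate commands, so this remains racy under parallel execution; the sketch only illustrates the single-runner case.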

Regarding the current cleanup logic of the PR: it could go behind a --reverse flag which, while not used for idempotency, would still provide a simple way to reverse changes performed via istio-iptables.

What do you think? :)

@bleggett (Contributor) commented Apr 24, 2024

One thing to keep in mind - the iptables library we have is used in 3 spots, 1 of which is run as a CLI binary in a sidecar context. The other 2 spots use it as a straight library, and may or may not use it in a sidecar context, as the README mentions: https://github.com/istio/istio/tree/master/tools/istio-iptables/pkg#readme

Whatever we do here has to work equally well for all 3 spots - it cannot be just for init containers or just for invocations of the library's CLI wrapper.

I think we can find a middle ground to reduce the impact on existing rules in case of reruns. Instead of doing a cleanup, I could add a step that checks for existing chains/rules via iptables -L (chains) and iptables -C (rules); if an existing chain/rule is found, the corresponding iptables -N/-A will be skipped. This should prevent downtime of the istio-iptables rules in case they are re-applied while the application is running.

This is still not an upgrade-safe reversal; outside of the init container scenario it would fail if we changed rules between releases (maybe a security hole was found in a set of rules in a previous release, etc.).

If you do it per-rule, we need 2 modes:

  • error out during cleanup if an existing rule does not exactly match the expected rule, even if it exists.
  • warn and remove during cleanup if an existing rule does not exactly match the expected rule.

Otherwise, skip if the rule is already there and is exactly what the current code expects.

mode 1 should be strictly safe in all contexts that need it, assuming all the rules are checked for correctness before they are removed.
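A minimal sketch of the two proposed modes; `reconcileRule`, `failOnMismatch`, and `warnAndRemove` are illustrative names, not code from the PR:

```go
package main

import (
	"errors"
	"fmt"
)

// cleanupMode mirrors the two proposed behaviors for per-rule cleanup.
type cleanupMode int

const (
	failOnMismatch cleanupMode = iota // mode 1: error out if an existing rule differs
	warnAndRemove                     // mode 2: warn and remove the mismatched rule
)

var errMismatch = errors.New("existing rule does not exactly match expected rule")

// reconcileRule decides what to do with an already-present rule during cleanup.
func reconcileRule(existing, expected string, mode cleanupMode) (string, error) {
	switch {
	case existing == expected:
		return "skip", nil // rule is exactly what the current code expects
	case mode == failOnMismatch:
		return "", errMismatch // strictly safe: refuse to touch unknown state
	default:
		return "remove-and-recreate", nil // warn and replace the stale rule
	}
}

func main() {
	a, _ := reconcileRule("-p tcp -j RETURN", "-p tcp -j RETURN", failOnMismatch)
	_, err := reconcileRule("-p tcp -j ACCEPT", "-p tcp -j RETURN", failOnMismatch)
	b, _ := reconcileRule("-p tcp -j ACCEPT", "-p tcp -j RETURN", warnAndRemove)
	fmt.Println(a, err != nil, b) // prints: skip true remove-and-recreate
}
```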

Keep in mind that checking chain/rules existance is not race-condition safe so running multiple istio-iptables cmd in parallel can lead to the usual shenanigans. In case of race condition:

Yep, we should be fine there. If we skip existing rules, that also means we cannot use the restore method of all-or-nothing rules application anymore, which means we will have to safely handle partial applications that fail midway through - something the restore model nicely elides for us currently.

We can have idempotency, no traffic disruptions, or safety - pick two, and kube has already picked the first one for us :D

@bleggett (Contributor) commented Apr 24, 2024

Another thing we could do is say that there are no traffic disruption guarantees for init containers, because as per kube init containers can be restarted.

And that if people are concerned about that, they should use istio CNI and the cni plugin instead of the init container.

I think that's very reasonable.

@howardjohn (Member):

Another thing we could do is say that there are no traffic disruption guarantees for init containers, because as per kube init containers can be restarted.

And that if people are concerned about that, they should use istio CNI and the cni plugin instead of the init container.

I think that's very reasonable.

👎

It is 100% unacceptable for us to take a working, running sidecar and suddenly stop capturing traffic. That is a critical CVE.

Especially when the motivation for it is mostly cosmetic (the init container restarting issue just results in it crashlooping, which has no effect on the app).

Also I don't really see why we cannot pick all 3. You can update iptables rules without "delete+add". Also, we probably don't even need to update them at all, just see they are already what we want and bail out without doing anything?

@bleggett (Contributor) commented Apr 24, 2024

👎

It is 100% unacceptable for us to take a working, running sidecar and suddenly stop capturing traffic. That is a critical CVE.

Yes, it is also not what I'm suggesting.

I'm suggesting traffic disruption, not traffic escape.

That's why I'm making a distinction between safety and traffic disruption guarantees - the former is escape, and not acceptable. The latter may be unavoidable if you use init containers, and that is not an unreasonable thing to say to users, especially since we support alternatives.

Especially when the motivation for it is mostly cosmetic (the init container restarting issue just results in it crashlooping, which has no effect on the app).

That's true here, but does it change the fact that we will still need a cleanup mechanism, either way, that works in all contexts, of which init containers are one?

We should at least document that we ignore the K8S spec for our init containers, then, and tell people to ignore the crashlooping Istio containers.

Also I don't really see why we cannot pick all 3. You can update iptables rules without "delete+add". Also, we probably don't even need to update them at all, just see they are already what we want and bail out without doing anything?

Yes, if they are all correct, we can do nothing. The complications enter if they are not all correct (because they got mangled, or are outdated, or otherwise not in a known-good state) - what do we do?

And if we go rule by rule to update, we are again excluding the possibility of using restore style transactions, which brings other risks (which is not to say we can't manage that, just that every approach has tradeoffs, so we must pick which of the three we care about most and are least willing to compromise on - which is safety. The debate is over the ordering of the remaining two)

@istio-testing istio-testing added the needs-rebase Indicates a PR needs to be rebased before being merged label May 3, 2024
@istio-testing istio-testing removed the needs-rebase Indicates a PR needs to be rebased before being merged label May 7, 2024
@leosarra (Contributor, Author) commented May 7, 2024

Hey all,
Thank you for all the feedback; I did some rework based on it :)
In the latest changes, the cleanup and re-application of iptables rules occur only if residue from previous executions is detected and that residue is incompatible with the intended iptables configuration.
In case the cleanup is needed, temporary safeguards are now added to stop traffic while the cleanup+apply step is being performed.
This allows for no traffic loss in case of reruns when the initial state was already correct; in the other cases a minor traffic downtime is expected (which is in any case more desirable than the current behavior on master).

Here is a diagram of the overall flow:
[flow diagram attached in the PR]
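The rerun flow described above could be sketched like this; the `plan` helper and the action names are illustrative stand-ins for the PR's actual logic:

```go
package main

import "fmt"

// plan sketches the decision flow: first run applies directly; a rerun over
// a still-correct state is a no-op; a rerun over incompatible residue adds
// temporary fail-closed guardrails around the cleanup+apply step.
func plan(residueFound, compatible bool) []string {
	switch {
	case !residueFound:
		return []string{"apply"} // first execution: nothing to clean up
	case compatible:
		return []string{"no-op"} // rerun over correct state: no traffic loss
	default:
		// rerun over incompatible residue: brief, fail-closed disruption
		return []string{"add-guardrails", "cleanup", "apply", "remove-guardrails"}
	}
}

func main() {
	fmt.Println(plan(false, false)) // [apply]
	fmt.Println(plan(true, true))   // [no-op]
	fmt.Println(plan(true, false))  // [add-guardrails cleanup apply remove-guardrails]
}
```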

There is one important limitation to mention about the code that verifies whether the current state is "compatible" with the desired iptables.
The implemented code verifies exact rule/chain matches, but it does not compare the order of the rules within chains.

In my initial testing, I tried to check ordering by directly comparing the overview captured by iptables-save with the intended rules. This proved unreliable, as the internal representation of iptables rules can differ from the rules as originally applied.
For example, flags can slightly change name after the rules are appended, the representation of marks can change (integer to hexadecimal), new flags may be added by default, and some flags may be removed as redundant.
This makes a comparison quite hard to pull off, and it may also be influenced by the kernel version.

The current approach still relies on iptables-save to get the current state and verify which chains/rules are defined, but part of the verification is done via iptables -C, which does not check rule ordering. I'm not sure whether I should consider this a deal-breaker for the whole verification logic.

The testing is mostly in place; I need to add some "unexpected rules" to verify that those are also properly cleaned up as expected.

@howardjohn (Member):

Nice, the new check sounds like it could be a lot more reliable.

One thing I worry about is the init container case in Kubernetes. It's important to consider that in this case there is nothing wrong with the pod today: all traffic works fine.

I worry we will have a false positive, rewrite the rules, and cause a small outage for these users; all of this for a change that wasn't actually fixing real traffic issues.

Is there some way we can only do this outside of k8s or something?

Also, kube-proxy does some updating of iptables; they may have some good patterns for this.

linux-foundation-easycla bot commented May 17, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

@leosarra force-pushed the init-idempotency branch 3 times, most recently from 4cd7bb8 to c2dd908 (May 17, 2024 13:10)
@bleggett (Contributor) left a comment:

few comments, looking better, ty!

} else if residueFound {
log.Info("Found residue of old iptables rules/chains, cleanup is needed")
if deltaExists && !reconcile {
log.Info("Found compatible iptables rules/chains, no reconciliation needed")
Contributor:

This log line doesn't seem to be correct anymore, should be "Found residue, reconciliation disabled"?

@leosarra (Contributor, Author) May 17, 2024:

My bad here, the naming was misleading. The reconcile variable was not supposed to be mentioned in VerifyIptablesState().
Code has been adjusted to mention residueExists and deltaExists.

Fast-fail based on reconcile enabled/disabled is here:

if residueExists && deltaExists && cfg.cfg.NoReconcile {
return fmt.Errorf("reconcile is needed but no-reconcile flag is set. Can't recover from this state")
}

func (cfg *IptablesConfigurator) VerifyIptablesState(iptVer, ipt6Ver *dep.IptablesVersion) (bool, bool) {
// These variables track the status of iptables installation
deltaExists := false // Flag to indicate if a difference is found between expected and current state
reconcile := false // Flag to indicate if a reconciliation via cleanup is needed
@bleggett (Contributor) May 17, 2024:

Why does this function set/return reconcile? I don't think this function needs to return two bools anyway, it can just report deltaFound true|false to the caller, which can decide what to do based on the config flag.

@leosarra (Contributor, Author) May 17, 2024:

Ah yes, you are right. I unintentionally used the deltaFound as the old leftoverFound.
I will change it to convey the proper intention and simplify the function return value

Contributor (Author):

On second thought, I believe the function should return both residueExists and deltaExists.
Returning only deltaExists wouldn't allow us to distinguish between the first execution and reruns that might require cleanup. This would lead to guardrails+cleanup on the first run, which is not desirable.

As for reconcile, on the other hand, I agree that it should not belong to this function.

@howardjohn (Member) left a comment:

I feel extremely uncomfortable with this. It seems like we are trying to solve a problem that has been solved in numerous projects in the space already, but doing it in a way they all have decided is broken and moved away from?

I don't think I can approve this unless we either move to a proven-safe (via prior art in another project) way to idempotently update iptables rules, or have a very compelling reason why our needs are different than other projects.

@@ -170,6 +177,125 @@ func (rb *IptablesRuleBuilder) buildRules(rules []*Rule) [][]string {
return output
}

func reverseRules(rules []*Rule) []*Rule {
Member:

Can we add comments? not great to have esoteric parsing logic without explaining what it does

Contributor:

👍 I also asked for more explicit/behaviorally-descriptive unit tests around these mutative funcs, it should be easy to fence these very tightly as there are no OS deps.

Contributor (Author):

I added comments to the reverse and check logic, same for the higher-level functions verifying the state of iptables.
Fine-grained tests for reverse and check were also added in iptables_builder_test.go

return output
}

func (rb *IptablesRuleBuilder) buildGuardrails() []*Rule {
Member:

I don't get this guardrails concept. isn't the point of iptables-save and restore to allow doing transactions? Why do we need to do delete+add when we can just reconcile the state in one go under a lock?

Do other implementations do this?

Contributor:

The guardrails I believe are in place because iptables-save output is not relied upon, and we have to go rule-by-rule with iptables -C as per #50328 (comment)

So, we have to DIY transactions, basically.

Contributor (Author):

In theory it would be possible to use iptables-restore to do cleanup+apply in one operation, although it cannot be considered a single atomic transaction in every circumstance.
iptables-restore certainly provides a level of atomicity, but that atomicity only applies at table level: an iptables-restore run is the sum of multiple atomic per-table transactions and cannot be considered a single atomic transaction in its entirety. See xtables-restore L96: https://git.netfilter.org/iptables/tree/iptables/xtables-restore.c#n96

Given that we sometimes change more than just the nat table, I did not feel confident that iptables-restore would provide the no-traffic-escape guarantee that would have allowed me to drop the safety guardrails.

On a side note, using iptables-restore has some extra challenges, as it requires the cleanup to be much more surgical: we must be sure that what we remove exists at that time, otherwise the whole table transaction would fail. This is not a blocker though; if needed I could make it work.
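For reference, an iptables-restore payload commits each table separately: each `*table ... COMMIT` block below is applied atomically on its own, but the two blocks are independent transactions, so a change spanning both tables is not atomic as a whole. The rule bodies here are illustrative, not the PR's actual rules:

```
*nat
:ISTIO_REDIRECT - [0:0]
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
*mangle
:ISTIO_DIVERT - [0:0]
-A ISTIO_DIVERT -j MARK --set-mark 1337
COMMIT
```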

// Execute iptables-restore
if err := cfg.executeIptablesRestoreCommand(iptVer, true); err != nil {
return err
func (cfg *IptablesConfigurator) getStateFromSave(data string) map[string]map[string][]string {
Member:

I don't think this is safe.

There is a rich history in Kubernetes of "parsing iptables-save is broken". kubernetes/kubernetes#112477 and many links from there are good history.

Calico is in a similar boat: https://github.com/projectcalico/calico/blob/23ae58b62765b14aa2c5952b2fc6c40155731a79/felix/iptables/table.go#L141

@bleggett (Contributor) May 17, 2024:

I agree we can't rely on that; I believe that is why @leosarra moved to iptables -C instead in #50328 (comment), which should address the potential wobbliness, but I will let them reply.

@leosarra (Contributor, Author) May 20, 2024:

Yes, what the Calico comment says is correct.
Parsing the rules provided by iptables-save is unsafe because the rules, as represented in the kernel, are different from how they were originally applied (although both representations are obviously equivalent).
In some scenarios the changes can be easily accounted for (e.g. ordering of flags), while in other cases they are really hard to handle (e.g. flags being removed/added/changed, values being represented differently, decimal -> hexadecimal). For this reason I rely on iptables-save only for the following things: obtaining *{table} and :{chain}, and counting the number of rules.

I use that information to decide whether a delta exists, using the following logic:

  • every Istio chain in the expected state must also exist in the current state
  • every Istio chain must have the same number of elements in both the current and expected state
  • every rule in the expected state (whether in an Istio or non-Istio chain) must exist in the current state; the verification is carried out using "iptables -C" on the rule produced by our iptables builder. No comparison of the parsed rules is performed.

If all 3 conditions are satisfied, then there is no delta and no new tables/chains have to be applied.
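A rough, self-contained sketch of this kind of structural-only parse of iptables-save output; `stateFromSave` is a hypothetical helper that deliberately keeps only tables, chain names, and per-chain rule counts, never the rule bodies (the PR's real code differs):

```go
package main

import (
	"fmt"
	"strings"
)

// stateFromSave extracts only the structural parts of iptables-save output:
// "*table" lines, ":chain ..." declarations, and a per-chain count of "-A"
// rules. Rule bodies are intentionally not parsed, since their kernel-side
// representation can differ from what was originally applied.
func stateFromSave(data string) map[string]map[string]int {
	state := map[string]map[string]int{} // table -> chain -> rule count
	table := ""
	for _, line := range strings.Split(data, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "*"):
			table = strings.TrimPrefix(line, "*")
			state[table] = map[string]int{}
		case strings.HasPrefix(line, ":") && table != "":
			chain := strings.Fields(line[1:])[0]
			state[table][chain] = 0
		case strings.HasPrefix(line, "-A ") && table != "":
			chain := strings.Fields(line)[1]
			state[table][chain]++
		}
	}
	return state
}

func main() {
	save := `*nat
:PREROUTING ACCEPT [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -p tcp -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT`
	fmt.Println(stateFromSave(save)["nat"]["ISTIO_REDIRECT"]) // prints: 1
}
```

With this structural snapshot, the three conditions above reduce to comparing chain sets and rule counts, while per-rule correctness is delegated to `iptables -C` against the builder's own rule strings.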

@leosarra (Contributor, Author) May 22, 2024:

I've taken a look at how kube-proxy, Linkerd, and Kuma fare with respect to reconciliation of iptables and iptables-save usage.

Overall, our use of the iptables-save output appears similar to kube-proxy's, with the exception that we also count the number of rules in a chain.
Linkerd behaves similarly to this PR, although there is no check for compatible iptables before performing a cleanup. I don't think there is a need for any guardrails in Linkerd, as all the rules reside in one table and it can be updated atomically with one transaction of iptables-restore, which keeps the logic quite simple.

Contributor:

I use that information to decide whether a delta exists, using the following logic:

every Istio chain in the expected state must also exist in the current state
every Istio chain must have the same number of elements in both the current and expected state
every rule in the expected state (whether in an Istio or non-Istio chain) must exist in the current state; the verification is carried out using "iptables -C" on the rule produced by our iptables builder. No comparison of the parsed rules is performed.

If all 3 conditions are satisfied, then there is no delta and no new tables/chains have to be applied.

Can you put this blurb in the istio-iptables README? This logic specifically is important to articulate.

Contributor (Author):

Can you put this blurb in the istio-iptables README? This logic specifically is important to articulate.

Done

// VerifyIptablesState function verifies the current iptables state against the expected state.
// The current state is considered equal to the expected state if the following three conditions are met:
// - Every ISTIO_* chain in the expected state must also exist in the current state.
// - Every ISTIO_* chain must have the same number of elements in both the current and expected state.
// - Every rule in the expected state (whether it is in an ISTIO or non-ISTIO chain) must also exist in the current state.
// The verification is performed by using "iptables -C" on the rule produced by our iptables builder. No comparison of the parsed rules is done.
//
// Note: The order of the rules is not checked and is not used to determine the equivalence of the two states.
// The function returns two boolean values, the first one indicates whether residues exist,
// and the second one indicates whether differences were found between the current and expected state.

Labels
release-notes-none Indicates a PR that does not require release notes. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.

Successfully merging this pull request may close these issues.

panic in istio-init
4 participants