Conversation

Member

@dcbw dcbw commented Apr 30, 2019

Cache the config JSON, CNI_ARGs, and CapabilityArgs on ADD operations,
and remove them on DEL. This allows runtimes to retrieve the cached
information to ensure that the pod is given the same config and options
on DEL as it was given on ADD.

@containernetworking/cni-maintainers @squeed

Contributor

@bboreham bboreham left a comment


Couple of nits to be going on with

}

// GetNetworkListCachedConfig returns the cached Config of the previous
// previous AddNetworkList() operation for a network list, or an error.
Contributor


previous previous?

bash -c "umask 0; PATH=$PATH go test $@"
}
if [ ! -z "${COVERALLS:-""}" ]; then
go get golang.org/x/tools/cmd/cover
Contributor


unrelated change?

return result, nil
}

type cachedConfig struct {
Member


For future compatibility, this should be something of the form

{
  "kind": "CniCacheV1",
  "value": {"cniVersion": "..."  }
}

Member Author


@squeed Good point.

Member

squeed commented May 2, 2019

Hang on, should we really be caching the config json? I like caching the ephemeral parameters, but not the config. What if you want to update some portion of the configuration file?

Member Author

dcbw commented May 2, 2019

Hang on, should we really be caching the config json? I like caching the ephemeral parameters, but not the config. What if you want to update some portion of the configuration file?

@squeed caching config is the whole point. Do you really want to DEL a pod with different config than you ADD-ed it? I struggle to think of a case where that's reasonable.

Member

squeed commented May 2, 2019

@squeed caching config is the whole point. Do you really want to DEL a pod with different config than you ADD-ed it? I struggle to think of a case where that's reasonable.

Funny, I would say: Definitely! Sometimes. Well, maybe sometimes. I can think of supporting cases for both.

If you change, say, the firewall chain, then you'd definitely want to delete using the old configuration. OTOH, if you change a database URL, then you'd want to use the new configuration. To me, caching the configuration for delete would be surprising.

Member Author

dcbw commented May 2, 2019

Funny, I would say: Definitely! Sometimes. Well, maybe sometimes. I can think of supporting cases for both.

@squeed OK I can buy changing a database URL or something, perhaps you're using a complex plugin and something there changed. But changing many other attributes would have a detrimental effect on the container. The problem is how do we know which ones?

At the very least, you shouldn't be able to change out the "name" and "type" keys between ADD + DEL (how this works with a ConfigList I have no idea). I'd argue you probably shouldn't change the host-local ranges either, but that's pretty plugin-specific. You could argue that it should be possible to add/remove some non-core plugins (tuning, maybe firewall, etc) but honestly I'm not sure how we could determine that.

Another thought experiment; if your config list included "portmap" and was passed portMappings on ADD, it would be quite odd not to call portmap and pass those same mappings on DEL...

Or, would it be OK to change the 'bridge' plugin's bridge name between ADD/DEL? Or "ipMasq"? Doing either of these things would either fail or incompletely clean things up.

Member

mars1024 commented May 2, 2019

I think maybe we can just cache the network name the pod used in libcni/ocicni, and let the CNI config file and CNI plugins handle the consistency between ADD and DEL?

Member

mars1024 commented May 2, 2019

I think we can follow the rule that, when we want to change CNI configs, the new config must always be able to completely DEL old pods that were set up by the old config with the same name. Based on this, we only need to cache the network name the pod used and trust the newest config with that name.

Member Author

dcbw commented May 29, 2019

So @squeed and I talked about it for a while, and decided that yes, there are use-cases for changing parts of a given network config underneath the container, but not for switching the network config out entirely. So much like @mars1024 is saying, a delete operation should probably only check the network name (and perhaps plugin type) from the config it's sending and make sure those match.

I think the config caching that this PR does is probably OK, since all it does is allow the runtime (dockershim, crio, etc.) to retrieve that config. Then the runtime can make the decision about whether to use an existing updated configuration that it already knows about, or, if that's no longer valid (e.g. the old config was deleted entirely), to use the cached config.

Does that sound OK to everyone? i.e. libcni only does the storage/retrieval, while the runtime itself makes policy decisions about how to handle config changes.

Member Author

dcbw commented Jun 5, 2019

Continued in #678
