[{"Input":{"Module":"github.com/docker/distribution","Description":"### Impact\n\nSystems that rely on digest equivalence for image attestations may be vulnerable to type confusion.\n\n### Patches\n\nUpgrade to at least `v2.8.0-beta.1` if you are running `v2.x` release. If you use the code from the `main` branch, update at least to the commit after [b59a6f827947f9e0e67df0cfb571046de4733586](https://github.com/distribution/distribution/commit/b59a6f827947f9e0e67df0cfb571046de4733586).\n\n### Workarounds\n\nThere is no way to work around this issue without patching.\n\n### References\n\nDue to [an oversight in the OCI Image Specification](https://github.com/opencontainers/image-spec/pull/411) that removed the embedded `mediaType` field from manifests, a maliciously crafted OCI Container Image can cause registry clients to parse the same image in two different ways without modifying the image’s digest by modifying the `Content-Type` header returned by a registry. This can invalidate a common pattern of relying on container image digests for equivalence.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n* Open an issue in [distribution](https://github.com/distribution/distribution) \n* Open an issue in [distribution-spec](https://github.com/opencontainers/distribution-spec) \n* Email us at [cncf-distribution-security@lists.cncf.io](mailto:cncf-distribution-security@lists.cncf.io)\n"},"Suggestion":{"Summary":"Type confusion in github.com/docker/distribution","Description":"Systems that rely on digest equivalence for image attestations may be vulnerable to type confusion. A maliciously crafted OCI Container Image can cause registry clients to parse the same image in two different ways without modifying the image's digest, invalidating the common pattern of relying on container image digests for equivalence. This problem has been addressed in newer versions by improving validation in manifest unmarshalling."}},{"Input":{"Module":"github.com/pomerium/pomerium","Description":"### Impact\nChanges to the OIDC claims of a user after initial login are not reflected in policy evaluation when using [`allowed_idp_claims`](https://www.pomerium.com/reference/#allowed-idp-claims) as part of policy. If using `allowed_idp_claims` and a user's claims are changed, Pomerium can make incorrect authorization decisions.\n\n### Patches\nv0.15.6\n\n### Workarounds\n- Clear data on `databroker` service by clearing redis or restarting the in-memory databroker to force claims to be updated\n\n### References\nhttps://github.com/pomerium/pomerium/pull/2724\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue in [Pomerium](https://github.com/pomerium/pomerium)\n* Email us at [security@pomerium.com](mailto:security@pomerium.com)\n"},"Suggestion":{"Summary":"Incorrect authorization in github.com/pomerium/pomerium","Description":"Pomerium is an open source identity-aware access proxy. Changes to the OIDC claims of a user after initial login are not reflected in policy evaluation when using allowed_idp_claims as part of policy. If using allowed_idp_claims and a user's claims are changed, Pomerium can make incorrect authorization decisions. For users unable to upgrade clear data on databroker service by clearing redis or restarting the in-memory databroker to force claims to be updated."}},{"Input":{"Module":"github.com/blevesearch/bleve","Description":"### Impact\n_What kind of vulnerability is it? 
Who is impacted?_\n\nBleve includes HTTP utilities under the bleve/http package, which are used by its sample application. \n(https://github.com/blevesearch/bleve-explorer)\nThese HTTP methods pave the way for exploitation of a node’s filesystem where the bleve index resides, \nif the user has used bleve’s own HTTP (bleve/http) handlers for exposing access to the indexes. \nFor instance, the CreateIndexHandler (http/index_create.go) and DeleteIndexHandler (http/index_delete.go) \nenable an attacker to create a bleve index (directory structure) anywhere the user running the server \nhas write permissions and to recursively delete any directory owned by the same user account.\n \nUsers who have used the bleve/http package for exposing access to a bleve index without explicit \nhandling for the Role Based Access Controls (RBAC) of the index assets would be impacted.\n\n\n### Patches\n_Has the problem been patched? What versions should users upgrade to?_\n\n**No**. The http package is purely intended to be used for demonstration purposes. \nBleve was never designed to handle RBAC, nor was it ever advertised to be used in that way. \nHence the collaborators of this project have decided to stay away from adding any authentication or \nauthorization to the bleve project at the moment.\n\n### Workarounds\n_Is there a way for users to fix or remediate the vulnerability without upgrading?_\n\nThe bleve/http package is mainly for demonstration purposes and it lacks exhaustive validation of the user \ninputs as well as any authentication and authorization measures. \nSo it is recommended to not use that in production use cases.\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue [here](https://github.com/blevesearch/bleve/issues).\n* Email us at security@couchbase.com or fts-team@couchbase.com.\n"},"Suggestion":{"Summary":"No access control in github.com/blevesearch/bleve and bleve/v2","Description":"HTTP handlers provide unauthenticated access to the local filesystem. The Bleve http package is intended for demonstration purposes and contains no authentication, authorization, or validation of user inputs. Exposing handlers from this package can permit attackers to create files and delete directories."}},{"Input":{"Module":"github.com/cloudflare/circl","Description":"### Impact\nWhen sampling randomness for a shared secret, the implementations of Kyber and FrodoKEM did not check whether `crypto/rand.Read()` returns an error. In rare deployment cases (error thrown by the `Read()` function), this could lead to a predictable shared secret.\n\nThe tkn20 and blindrsa components did not check whether enough randomness was returned from the user provided randomness source. Typically the user provides `crypto/rand.Reader`, which in the vast majority of cases will always return the right number of random bytes. In the cases where it does not, or the user provides a source that does not, the blinding for blindrsa is weak and integrity of the plaintext is not ensured in tkn20.\n\n\n### Patches\nThe fix was introduced in CIRCL v. 1.3.3\n"},"Suggestion":{"Summary":"Leaked shared secret and weak blinding in github.com/cloudflare/circl","Description":"When sampling randomness for a shared secret, the implementations of Kyber and FrodoKEM did not check whether crypto/rand.Read() returns an error. In rare deployment cases (error thrown by the Read() function), this could lead to a predictable shared secret. 
The tkn20 and blindrsa components did not check whether enough randomness was returned from the user provided randomness source. Typically the user provides crypto/rand.Reader, which in the vast majority of cases will always return the right number of random bytes. In the cases where it does not, or the user provides a source that does not, the blinding for blindrsa is weak and integrity of the plaintext is not ensured in tkn20."}},{"Input":{"Module":"github.com/libp2p/go-libp2p","Description":"### Impact\nVersions older than `v0.18.0` of go-libp2p are vulnerable to targeted resource exhaustion attacks. These attacks target libp2p’s connection, stream, peer, and memory management. An attacker can cause the allocation of large amounts of memory, ultimately leading to the process getting killed by the host’s operating system. While a connection manager tasked with keeping the number of connections within manageable limits has been part of go-libp2p, this component was designed to handle the regular churn of peers, not a targeted resource exhaustion attack.\n\nIn the original version of the attack, the malicious node would continue opening new streams on a stream multiplexer that doesn’t provide sufficient back pressure (yamux or mplex). It is easy to defend against this one attack, but there are countless variations of this attack:\n* Opening streams and causing a non-trivial memory allocation (e.g., for multistream or protobuf parsing)\n* Creating a lot of sybil nodes and opening new connections across nodes\n\n### Patches (What to do as a go-libp2p consumer:)\n1. Update your go-libp2p dependency to go-libp2p v0.18.0 or greater (current version as of publish date is [v0.24.0](https://github.com/libp2p/go-libp2p/releases/tag/v0.24.0).)\n - Note: **It's recommended that you update to `v0.21.0` onwards** as you’ll get some useful functionality that will help in production environments like better metrics around resource usage, Grafana dashboards around resource usage, allow list support, and default autoscaling limits. [Please see the v0.21.0 release notes for more info.](https://github.com/libp2p/go-libp2p/releases/tag/v0.21.0))\n\n2. Determine appropriate limits for your application - go-libp2p sets up a resource manager with the default limits if none are provided. For default definitions please see [limits_defaults.go](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/limit_defaults.go). These limits are also set to automatically scale; this is done using the [AutoScale method of the ScalingLimitConfig](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/README.md#scaling-limits). We recommend you [tune your limits as described here](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/README.md#how-to-tune-your-limits).\n\n3. Configure your node to be attack resilient. See [how to respond to an attack and identify misbehaving peers here](https://docs.libp2p.io/concepts/security/dos-mitigation/#responding-to-an-attack). Then set up automatic blocking with fail2ban using canonical libp2p log lines: [guide on how to do so here](https://docs.libp2p.io/concepts/security/dos-mitigation/#how-to-automate-blocking-with-fail2ban).\n\n#### Examples\n* Lotus’ integration can be found in https://github.com/filecoin-project/lotus/blob/master/node/modules/lp2p/rcmgr.go. Lotus reads user-configured resource limits from a limits.json file in the root directory. 
This allows users to share their resource manager configuration independent of any other configurations.\n* Kubo’s (formerly go-ipfs) integration can be found in https://github.com/ipfs/go-ipfs/blob/master/core/node/libp2p/rcmgr.go. Kubo reads the limits from the IPFS config file.\n\n**Note:** go-libp2p still implements the [connection manager](https://github.com/libp2p/go-libp2p/tree/master/p2p/net/connmgr) mentioned above. The connection manager is a component independent of the resource manager, which aims to keep the number of libp2p connections between a low and a high watermark. When modifying connection limits, it’s advantageous to keep the configuration of these components consistent, i.e., when setting a limit of N concurrent connections in the resource manager, the high watermark should be at most (and ideally slightly less than) N.\n\n### Workarounds\nAlthough there are no workarounds within go-libp2p, some range of attacks can be mitigated using OS tools (like manually blocking malicious peers using `iptables` or `ufw`) or making use of a load balancer in front of libp2p nodes.\n\nHowever, these require direct action \u0026 responsibility on your part and are no substitutes for upgrading go-libp2p. Therefore, we highly recommend upgrading your go-libp2p version for the way it enables tighter scoped limits and provides visibility into and easier reasoning about go-libp2p resource utilization.\n\n### References\nPlease see our DoS Mitigation page for more information on how to incorporate mitigation strategies, monitor your application, and respond to attacks: https://docs.libp2p.io/reference/dos-mitigation/. \n\nPlease see the related disclosure for rust-libp2p: https://github.com/libp2p/rust-libp2p/security/advisories/GHSA-jvgw-gccv-q5p8 and js-libp2p: https://github.com/libp2p/js-libp2p/security/advisories/GHSA-f44q-634c-jvwv\n\n#### For more information\n\nIf you have any questions or comments about this advisory email us at [security@libp2p.io](mailto:security@libp2p.io)"},"Suggestion":{"Summary":"Resource exhaustion in github.com/libp2p/go-libp2p","Description":"go-libp2p is vulnerable to targeted resource exhaustion attacks. These attacks target libp2p's connection, stream, peer, and memory management. An attacker can cause the allocation of large amounts of memory, ultimately leading to the process getting killed by the host's operating system. While a connection manager tasked with keeping the number of connections within manageable limits has been part of go-libp2p, this component was designed to handle the regular churn of peers, not a targeted resource exhaustion attack. It's recommended to update to v0.21.0 onwards to get some useful functionality that will help in production environments like better metrics around resource usage, Grafana dashboards around resource usage, allow list support, and default autoscaling limits."}},{"Input":{"Module":"github.com/supranational/blst","Description":"### Impact\n\nBlst versions v0.3.0 through 0.3.10 failed to perform a signature group-check if the call to `SigValidate` in the Go bindings was complemented with a check for infinity. Formally speaking, infinity, or the identity element of the elliptic curve group, is a member of the group, and the group-check should allow it. An initial review of blst users on GitHub did not uncover any use of this function with the complementary infinity check. 
This optional check was added as a convenience feature because despite infinity being a legitimate member of the group, individual signatures should never be infinite (as it is equivalent to having zero for the secret key), and observing one should raise a flag.\n\n### Description\n\nThe vulnerable function is declared as `SigValidate(sigInfcheck bool) bool` and if called with the `sigInfcheck` argument set to `true`, the group-check was omitted. The group-check is required to be performed on untrusted input, because the pairing, the corner-stone operation of the signature scheme, is defined only on points that are members of a specific cyclic group, which is a subset of all the possible points on elliptic curves in question. Submitting an untrusted point outside the group opens up the possibility of accepting an alternative signature for a chosen message.\n\nIt should be noted that the SigValidate call is not the only way to perform the group-checks. There are optional checks integrated into various other verification methods, such as `Verify`, `AggregateVerify`, etc., as well as signature aggregation methods, such as `PairingAggregate*`. The reason why there are multiple options is that the group-check is a relatively expensive operation, and application developers are arguably entitled to the freedom to choose when it's performed.\n\n### Patches\n\nThis issue has been resolved in the v0.3.11 release and users are advised to update if their application is affected or alternatively omit the complementary infinity check.\n\n### Credits\n\nA special thanks to Yunjong Jeong (@blukat29) for the discovery and disclosure of this vulnerability.\n"},"Suggestion":{"Summary":"Blst fails to perform group signature validation","Description":"When complemented with a check for infinity, blst skips performing a signature group-check. Formally speaking, infinity is the identity element of the elliptic curve group and as such it is a member of the group, so the group-check should allow it. The fix performs the check even in the presence of infinity."}},{"Input":{"Module":"github.com/argoproj/argo-cd/v2","Description":"### Impact\nAll versions of Argo CD starting with v2.6.0-rc1 have an output sanitization bug which leaks repository access credentials in error messages. These error messages are visible to the user, and they are logged. The error message is visible when a user attempts to create or update an Application via the Argo CD API (and therefore the UI or CLI). The user must have `applications, create` or `applications, update` RBAC access to reach the code which may produce the error.\n\nThe user is not guaranteed to be able to trigger the error message. They may attempt to spam the API with requests to trigger a rate limit error from the upstream repository. \n\nIf the user has `repositories, update` access, they may edit an existing repository to introduce a URL typo or otherwise force an error message. But if they have that level of access, they are probably intended to have access to the credentials anyway.\n\n### Patches\n\nA patch for this vulnerability has been released in the following Argo CD version:\n\n* v2.6.1\n\n### Workarounds\n\nThe only way to completely resolve the issue is to upgrade.\n\n#### Mitigations\n\nTo mitigate the issue, make sure that your repo credentials have only least necessary privileges. 
For example, the credentials should not have push access, and they should not have access to more resources than what Argo CD actually needs (for example, a whole GitHub org when only one repo is needed).\n\nTo further mitigate the impact of a leaked write-capable repo credential, you could [enable commit signature verification](https://argo-cd.readthedocs.io/en/stable/user-guide/gpg-verification/#enforcing-signature-verification). Even if someone could push a malicious commit, the commit would not be synced.\n\nYou should also enforce least privileges in Argo CD RBAC. Make sure users only have `repositories, update`, `applications, update`, or `applications, create` access if they absolutely need it.\n\n### References\n\n* The problem was initially reported in a [GitHub issue](https://github.com/argoproj/argo-cd/issues/12309)\n* [Argo CD RBAC configuration documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/)\n\n### For more information\n\n* Open an issue in [the Argo CD issue tracker](https://github.com/argoproj/argo-cd/issues) or [discussions](https://github.com/argoproj/argo-cd/discussions)\n* Join us on [Slack](https://argoproj.github.io/community/join-slack) in channel #argo-cd\n"},"Suggestion":{"Summary":"Repository access credential leak in github.com/argoproj/argo-cd/v2","Description":"Argo CD has an output sanitization bug which leaks repository access credentials in error messages. These error messages are visible to the user, and they are logged. The error message is visible when a user attempts to create or update an Application via the Argo CD API (and therefore the UI or CLI). The user must have `applications, create` or `applications, update` RBAC access to reach the code which may produce the error. The user is not guaranteed to be able to trigger the error message. They may attempt to spam the API with requests to trigger a rate limit error from the upstream repository. If the user has `repositories, update` access, they may edit an existing repository to introduce a URL typo or otherwise force an error message."}},{"Input":{"Module":"github.com/ethereum/go-ethereum","Description":"### Impact\n\nA vulnerability in the Geth EVM could cause a node to reject the canonical chain. \n\n### Description \n\nA memory-corruption bug within the EVM can cause a consensus error, where vulnerable nodes obtain a different `stateRoot` when processing a maliciously crafted transaction. This, in turn, would lead to the chain being split in two forks.\n\nAll Geth versions supporting the London hard fork are vulnerable (the bug predates London), so all users should update.\n\nThis bug was exploited on Mainnet at block 13107518, leading to a minority chain split. \n\n### Patches\n\nA patch is included in the `v1.10.8` release.\nThe exact patch to fix the issue is contained within this [commit](https://github.com/ethereum/go-ethereum/pull/23381/commits/4d4879cafd1b3c906fc184a8c4a357137465128f)\n\n### Workarounds\n\nNo workarounds exist, save to update and/or apply the patch commit. \n\n### References
\n\nPost-mortem [write-up](https://github.com/ethereum/go-ethereum/blob/master/docs/postmortems/2021-08-22-split-postmortem.md).\n\n### Credits\n\nThe bug was found by @guidovranken (working for [Sentnl](https://sentnl.io/) during an audit of the [Telos EVM](https://www.telos.net/evm)) and reported via bounty@ethereum.org.\n\n### For more information\nIf you have any questions or comments about this advisory:\n\n* Open an issue in [go-ethereum](https://github.com/ethereum/go-ethereum/)\n* Email us at [security@ethereum.org](mailto:security@ethereum.org)\n"},"Suggestion":{"Summary":"Consensus flaw during block processing in github.com/ethereum/go-ethereum","Description":"A vulnerability in the Geth EVM can cause a node to reject the canonical chain. A memory-corruption bug within the EVM can cause a consensus error, where vulnerable nodes obtain a different stateRoot when processing a maliciously crafted transaction. This, in turn, would lead to the chain being split in two forks."}},{"Input":{"Module":"github.com/cometbft/cometbft","Description":"### Impact\n\nThe mempool maintains two data structures to keep track of outstanding transactions: a list and a map.\nThese two data structures are supposed to be in sync all the time in the sense that the map tracks the index (if any) of the transaction in the list. \n\nUnfortunately, it is possible to have them out of sync. When this happens, the list may contain several copies of the same transaction.\nBecause the map tracks a single index, it is then no longer possible to remove all the copies of the transaction from the list.\nThis happens even if the duplicated transaction is later committed in a block.\nThe only way to remove the transaction is by restarting the node.\n\nThese are the steps to cause the above duplication problem. Everything should happen within one height, that is no `FinalizeBlock` or `BeginBlock` ABCI calls should happen while these steps are reproduced:\n\n1. send transaction tx1 to the target full node via RPC\n2. send N more different transactions to the target full node, where N should be higher than the node's configured value for `cache_size` in `config.toml`\n3. send transaction tx1 again to the target full node\n\nOne of the copies of tx1 is now _stuck_ in the mempool's data structures. Effectively causing a memory leak, and having that node gossiping that transaction to its peers forever.\n\nThe above problem can be repeated on and on until a sizable number of transactions are stuck in the mempool, in order to try to bring down the target node.\n\nThis problem is present in releases: `v0.37.0`, and `v0.37.1`, as well as in `v0.34.28`, and all previous releases of the CometBFT repo. 
It will be fixed in releases `v0.34.29` and `v0.37.2`.\n\n### Patches\n\nThe PR containing the fix is [here](https://github.com/cometbft/cometbft/pull/890).\n\n\n### Workarounds\n\n* Increasing the value of `cache_size` in `config.toml` makes it very difficult to effectively attack a full node.\n* Not exposing the transaction submission RPCs would mitigate the probability of a successful attack, as the attacker would then have to create a modified (byzantine) full node to be able to perform the attack via p2p.\n\n### References\n\n* [PR](https://github.com/tendermint/tendermint/pull/2778) that introduced the map to track transactions in the mempool.\n* [PR](https://github.com/cometbft/cometbft/pull/890) containing the fix.\n"},"Suggestion":{"Summary":"Denial of service via OOM in github.com/cometbft/cometbft","Description":"A bug in the CometBFT middleware causes the mempool's two data structures to fall out of sync. This can lead to duplicate transactions that cannot be removed, even after they are committed in a block. The only way to remove the transaction is to restart the node. This can be exploited by an attacker to bring down a node by repeatedly submitting duplicate transactions."}},{"Input":{"Module":"github.com/containers/buildah","Description":"A bug was found in Buildah where containers were created with non-empty inheritable Linux process capabilities, creating an atypical Linux environment and enabling programs with inheritable file capabilities to elevate those capabilities to the permitted set during execve(2).\n\nThis bug did not affect the container security sandbox as the inheritable set never contained more capabilities than were included in the container's bounding set.\n"},"Suggestion":{"Summary":"Incorrect default permissions in github.com/containers/buildah","Description":"Containers are created with non-empty inheritable Linux process capabilities, permitting programs with inheritable file capabilities to elevate those capabilities to the permitted set during execve(2). This bug does not affect the container security sandbox, as the inheritable set never contains more capabilities than are included in the container's bounding set."}},{"Input":{"Module":"github.com/fluxcd/helm-controller/api","Description":"Flux controllers within the affected versions range are vulnerable to a denial of service attack. 
Users that have permissions to change Flux’s objects, either through a Flux source or directly within a cluster, can provide invalid data to fields `.spec.interval` or `.spec.timeout` (and structured variations of these fields), causing the entire object type to stop being processed.\n\nThe issue has two root causes: a) the Kubernetes type `metav1.Duration` not being fully compatible with the Go type `time.Duration` as explained on [upstream report](https://github.com/kubernetes/apimachinery/issues/131); b) lack of validation within Flux to restrict allowed values.\n\n### Workarounds\n\nAdmission controllers can be employed to restrict the values that can be used for fields `.spec.interval` and `.spec.timeout`, however upgrading to the latest versions is still the recommended mitigation.\n\n### Credits\n\nThis issue was reported by Alexander Block (@codablock) through the Flux security mailing list (as [recommended](https://fluxcd.io/security/#report-a-vulnerability)).\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n- Open an issue in any of the affected repositories.\n- Contact us at the CNCF Flux channel.\n\n### References\n\n- https://github.com/kubernetes/apimachinery/issues/131\n\n"},"Suggestion":{"Summary":"Denial of service in flux controllers in github.com/fluxcd modules","Description":"Flux controllers are vulnerable to a denial of service attack. Users that have permissions to change Flux's objects, either through a Flux source or directly within a cluster, can provide invalid data to fields `.spec.interval` or `.spec.timeout` (and structured variations of these fields), causing the entire object type to stop being processed. The issue has two root causes: a) the Kubernetes type `metav1.Duration` is not fully compatible with the Go type `time.Duration` as explained in https://github.com/kubernetes/apimachinery/issues/131, and b) a lack of validation within Flux to restrict allowed values."}},{"Input":{"Module":"helm.sh/helm/v3","Description":"Fuzz testing, by Ada Logics and sponsored by the CNCF, identified input to functions in the _strvals_ package that can cause a stack overflow. In Go, a stack overflow cannot be recovered from. Applications that use functions from the _strvals_ package in the Helm SDK can suffer a Denial of Service attack when they use this package and it panics.\n\n### Impact\n\nThe _strvals_ package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like `--set`, `--set-string`, and others that enable the user to pass in strings that are merged into the values. The _strvals_ package converts these strings into structures Go can work with. Some string inputs can cause array data structures to be created causing a stack overflow.\n\nApplications that use the _strvals_ package in the Helm SDK to parse user supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from.\n\nThe Helm Client will panic with input to `--set`, `--set-string`, and other value setting flags that causes a stack overflow. Helm is not a long running service so the panic will not affect future uses of the Helm client.\n\n### Patches\n\nThis issue has been resolved in 3.10.3. 
\n\n### Workarounds\n\nSDK users can validate strings supplied by users won't create large arrays causing significant memory usage before passing them to the _strvals_ functions.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."},"Suggestion":{"Summary":"Denial of service in string value parsing in helm.sh/helm/v3","Description":"Applications that use the strvals package in the Helm SDK to parse user supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from. The strvals package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like --set, --set-string, and others that enable the user to pass in strings that are merged into the values. The strvals package converts these strings into structures Go can work with. Some string inputs can cause a stack overflow. In Go, a stack overflow cannot be recovered from. The Helm Client will panic with input to --set, --set-string, and other value setting flags that causes a stack overflow. Helm is not a long running service so the panic will not affect future uses of the Helm client."}},{"Input":{"Module":"github.com/runatlantis/atlantis","Description":"The package github.com/runatlantis/atlantis/server/controllers/events before 0.19.7 is vulnerable to Timing Attack in the webhook event validator code, which does not use a constant-time comparison function to validate the webhook secret. It can allow an attacker to recover this secret and then forge webhook events."},"Suggestion":{"Summary":"Timing attack in github.com/runatlantis/atlantis","Description":"Validation of Gitlab requests can leak secrets. The package github.com/runatlantis/atlantis/server/controllers/events uses a non-constant time comparison for secrets while validating a Gitlab request. This allows for a timing attack where an attacker can recover a secret and then forge the request."}},{"Input":{"Module":"github.com/rancher/wrangler","Description":"### Impact\n\nA denial of service (DoS) vulnerability was discovered in the Wrangler Git package affecting versions up to and including `v1.0.0`.\n\nSpecially crafted Git credentials can result in a denial of service (DoS) attack on an application that uses Wrangler due to the exhaustion of the available memory and CPU resources. This is caused by a lack of input validation of Git credentials before they are used, which may lead to a denial of service in some cases. This issue can be triggered when accessing both private and public Git repositories. 
Otherwise, the best course of action is to update to a patched Wrangler version.\n\n### Patches\n\nPatched versions include `v1.0.1` and later and the backported tags - `v0.7.4-security1`, `v0.8.5-security1` and `v0.8.11`.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Reach out to [SUSE Rancher Security team](https://github.com/rancher/rancher/security/policy) for security related inquiries.\n* Open an issue in [Rancher](https://github.com/rancher/rancher/issues/new/choose) or [Wrangler](https://github.com/rancher/wrangler/issues/new) repository.\n* Verify our [support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/) and [product support lifecycle](https://www.suse.com/lifecycle/)."},"Suggestion":{"Summary":"Denial of service when processing Git credentials in github.com/rancher/wrangler","Description":"A denial of service (DoS) vulnerability exists in the Wrangler Git package. Specially crafted Git credentials can result in a denial of service (DoS) attack on an application that uses Wrangler due to the exhaustion of the available memory and CPU resources. This is caused by a lack of input validation of Git credentials before they are used, which may lead to a denial of service in some cases. This issue can be triggered when accessing both private and public Git repositories. A workaround is to sanitize input passed to the Git package to remove potential unsafe and ambiguous characters. Otherwise, the best course of action is to update to a patched Wrangler version."}},{"Input":{"Module":"github.com/containerd/imgcrypt","Description":"Imgcrypt implements a function `CheckAuthorization()` that is supposed to check whether a user is authorized to access an encrypted image given the keys that the user has provided on the command line that would enable decryption of the image. The check is intended to prevent a user from starting a container from an image that has previously been decrypted by another user on the same system, where a decrypted version of the image layers may already be available in the cache locally.\n\nThe failure occurs when an image with a ManifestList is used and the architecture of the local host is not the first one in the ManifestList. In the version prior to the fix, only the first architecture in the list was tested, which may not have its layers available locally (were not pulled) since it cannot be run on the host architecture. Therefore, the verdict on unavailable layers was that the image could be run anticipating that image run failure would occur later due to the layers not being available. However, this verdict to allow the image to run led to other architectures in the ManifestList being able to run an image without providing keys if that image had previously been decrypted. The fixed version now skips over irrelevant architectures and tests the Manifest of the local architecture, if available.\n\nKnown projects that use the `CheckAuthorization()` of imgcrypt include, for example, the ctr-enc client tool provided by imgcrypt. In this implementation, the call to `CheckAuthorization()` is used on the client side and could therefore also be easily circumvented by a modified client tool not calling this function.\n\nIn relation to the vulnerability in ctr-enc, affected environments would have to allow different users to invoke ctr-enc indirectly using some sort of management stack that gives users indirect access to ctr-enc.\n\nThe patch has been applied to imgcrypt v1.1.4. 
Workarounds may include usage of different namespaces for each remote user."},"Suggestion":{"Summary":"Incorrect authorization in github.com/containerd/imgcrypt","Description":"The imgcrypt library provides API extensions for containerd to support encrypted container images and implements the ctd-decoder command line tool for use by containerd to decrypt encrypted container images. The imgcrypt function `CheckAuthorization` is supposed to check whether the current user is authorized to access an encrypted image and prevent the user from running an image that another user previously decrypted on the same system. In versions prior to 1.1.4, a failure occurs when an image with a ManifestList is used and the architecture of the local host is not the first one in the ManifestList. Only the first architecture in the list was tested, which may not have its layers available locally since it could not be run on the host architecture. Therefore, the verdict on unavailable layers was that the image could be run anticipating that image run failure would occur later due to the layers not being available. However, this verdict to allow the image to run enabled other architectures in the ManifestList to run an image without providing keys if that image had previously been decrypted. A patch has been applied to imgcrypt 1.1.4. Workarounds may include usage of different namespaces for each remote user."}},{"Input":{"Module":"github.com/gagliardetto/binary","Description":"### Impact\n\u003e _What kind of vulnerability is it? Who is impacted?_\n\nThe vulnerability is a memory allocation vulnerability that can be exploited to allocate slices in memory with (arbitrary) excessive size value, which can either exhaust available memory or crash the whole program.\n\nWhen using `github.com/gagliardetto/binary` to parse unchecked (or wrong type of) data from untrusted sources of input (e.g. the blockchain) into slices, it's possible to allocate memory with excessive size.\n\nWhen the `dec.Decode(\u0026val)` method is used to parse data into a structure that is or contains slices of values, the length of the slice was previously read directly from the data itself without any checks on the size of it, and then a slice was allocated. This could lead to an overflow and an allocation of memory with excessive size value.\n\nExample:\n\n```go\npackage main\n\nimport (\n\t\"github.com/gagliardetto/binary\" // any version before v0.7.1 is vulnerable\n\t\"log\"\n)\n\ntype MyStruct struct {\n\tField1 []byte // field is a slice (could be a slice of any type)\n}\n\nfunc main() {\n\t// Let's assume that the data is coming from the blockchain:\n\tdata := []byte{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10}\n\t\n\tvar val MyStruct\n\t// - To determine the size of the val.Field1 slice, the decoder will read the length\n\t// of the slice from the data itself without any checks on the size of it.\n\t//\n\t// - This could lead to an allocation of memory with excessive size value.\n\t// Which means: []byte{0x01, 0x02, 0x03, 0x04} will be read as the length\n\t// of the slice (= 67305985) and then an allocation of memory with 67305985 bytes will be made.\n\t//\n\tdec := binary.NewBorshDecoder(data)\n\terr := dec.Decode(\u0026val) // or binary.UnmarshalBorsh(\u0026val, data) or binary.UnmarshalBin(\u0026val, data) etc.\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n```\n\n### Patches\n\u003e _Has the problem been patched? 
What versions should users upgrade to?_\n\nThe vulnerability has been patched in `github.com/gagliardetto/binary` `v0.7.1`\n\nUsers should upgrade to `v0.7.1` or higher.\n\nTo upgrade to `v0.7.1` or higher, run:\n\n```bash\ngo get github.com/gagliardetto/binary@v0.7.1\n\n# or\n\ngo get github.com/gagliardetto/binary@latest\n```\n\n### Workarounds\n\u003e _Is there a way for users to fix or remediate the vulnerability without upgrading?_\n\nA workaround is not to rely on the `dec.Decode(\u0026val)` function to parse the data, but to use a custom `UnmarshalWithDecoder()` method that reads and checks the length of any slice.\n\n### References\n\u003e _Are there any links users can visit to find out more?_\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue in [github.com/gagliardetto/binary](https://github.com/gagliardetto/binary)\n* DM me on [twitter](https://twitter.com/immaterial_ink)\n"},"Suggestion":{"Summary":"Resource exhaustion in github.com/gagliardetto/binary","Description":"A memory allocation vulnerability can be exploited to allocate arbitrarily large slices, which can exhaust available memory or crash the program. When parsing data from untrusted sources of input (e.g. the blockchain), the length of the slice to allocate is read directly from the data itself without any checks, which could lead to an allocation of excessive memory."}},{"Input":{"Module":"helm.sh/helm/v3","Description":"A Helm contributor discovered an information disclosure vulnerability using the `getHostByName` template function.\n\n### Impact\n\n`getHostByName` is a Helm template function introduced in Helm v3. The function is able to accept a hostname and return an IP address for that hostname. To get the IP address the function performs a DNS lookup. The DNS lookup happens when used with `helm install|upgrade|template` or when the Helm SDK is used to render a chart.\n\nInformation passed into the chart can be disclosed to the DNS servers used to lookup the IP address. For example, a malicious chart could inject `getHostByName` into a chart in order to disclose values to a malicious DNS server.\n\n### Patches\n\nThe issue has been fixed in Helm 3.11.1.\n\n### Workarounds\n\nPrior to using a chart with Helm verify the `getHostByName` function is not being used in a template to disclose any information you do not want passed to DNS servers.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Philipp Stehle at SAP."},"Suggestion":{"Summary":"Information disclosure in helm.sh/helm/v3","Description":"An information disclosure vulnerability exists in the `getHostByName` template function. `getHostByName` is a Helm template function introduced in Helm v3. The function is able to accept a hostname and return an IP address for that hostname. To get the IP address the function performs a DNS lookup. The DNS lookup happens when used with `helm install|upgrade|template` or when the Helm SDK is used to render a chart. Information passed into the chart can be disclosed to the DNS servers used to lookup the IP address. 
For example, a malicious chart could inject `getHostByName` into a chart in order to disclose values to a malicious DNS server."}},{"Input":{"Module":"github.com/ipfs/go-unixfsnode","Description":"## Impact\n\nTrying to read malformed HAMT sharded directories can cause panics and virtual memory leaks.\nIf you are reading untrusted user input, an attacker can then trigger a panic.\n\nThis is caused by a bogus fanout parameter in the HAMT directory nodes.\nThis includes checks returned in [ipfs/go-bitfield GHSA-2h6c-j3gf-xp9r](https://github.com/ipfs/go-bitfield/security/advisories/GHSA-2h6c-j3gf-xp9r), as well as limiting the fanout to \u003c= 1024 (to avoid attempts of arbitrary sized allocations).\n\n## Patches\n- https://github.com/ipfs/go-unixfsnode/commit/91b3d39d33ef0cd2aff2c95d50b2329350944b68\n- https://github.com/ipfs/go-unixfsnode/commit/a4ed723727e0bdc2277158337c2fc0d82802d122\n\n## References\n\n* https://github.com/ipfs/go-unixfs/security/advisories/GHSA-q264-w97q-q778\n* https://github.com/ipfs/go-bitfield/security/advisories/GHSA-2h6c-j3gf-xp9r\n"},"Suggestion":{"Summary":"Denial of service via HAMT decoding panic in github.com/ipfs/go-unixfsnode","Description":"Trying to read malformed HAMT sharded directories can cause panics and virtual memory leaks. If you are reading untrusted user input, an attacker can then trigger a panic. This is caused by a bogus fanout parameter in the HAMT directory nodes. There are no known workarounds (users are advised to upgrade)."}},{"Input":{"Module":"github.com/ipfs/go-unixfs","Description":"### Impact\nTrying to read malformed HAMT sharded directories can cause panics and virtual memory leaks.\nIf you are reading untrusted user input, an attacker can then trigger a panic.\n\nThis is caused by a bogus `fanout` parameter in the HAMT directory nodes.\nThis includes checks returned in [ipfs/go-bitfield GHSA-2h6c-j3gf-xp9r](https://github.com/ipfs/go-bitfield/security/advisories/GHSA-2h6c-j3gf-xp9r), as well as limiting the `fanout` to `\u003c= 1024` (to avoid attempts of arbitrary sized allocations).\n\n### Patches\n- https://github.com/ipfs/go-unixfs/commit/dbcc43ec3e2db0d01e8d80c55040bba3cf22cb4b\n\n### Workarounds\nDo not feed untrusted user data to the decoding functions.\n\n### References\n- https://github.com/ipfs/go-bitfield/security/advisories/GHSA-2h6c-j3gf-xp9r\n"},"Suggestion":{"Summary":"Denial of service via HAMT decoding panic in github.com/ipfs/go-unixfs","Description":"Trying to read malformed HAMT sharded directories can cause panics and virtual memory leaks. If you are reading untrusted user input, an attacker can then trigger a panic. This is caused by a bogus `fanout` parameter in the HAMT directory nodes. A workaround is to not feed untrusted user data to the decoding functions."}},{"Input":{"Module":"github.com/google/go-attestation","Description":"### Impact\n\nAn improper input validation vulnerability in go-attestation before 0.4.0 allows local users to provide a maliciously-formed Quote over no/some PCRs, causing `AKPublic.Verify` to succeed despite the inconsistency. Subsequent use of the same set of PCR values in `Eventlog.Verify` lacks the authentication performed by quote verification, meaning a local attacker could couple this vulnerability with a maliciously-crafted TCG log in `Eventlog.Verify` to spoof events in the TCG log, hence defeating remotely-attested measured-boot.\n\n### Patches\nThis issue is resolved in version 0.4.0. 
If your usage of this library verifies PCRs using multiple quotes, make sure to use the new method `AKPublic.VerifyAll()` instead of `AKPublic.Verify`."},"Suggestion":{"Summary":"Improper input validation in github.com/google/go-attestation","Description":"A local attacker can defeat remotely-attested measured boot. Improper input validation in AKPublic.Verify can cause it to succeed when provided with a maliciously-formed Quote over no/some PCRs. Subsequent use of the same set of PCR values in Eventlog.Verify lacks the authentication performed by quote verification, meaning a local attacker can couple this vulnerability with a maliciously-formed TCG log in Eventlog.Verify to spoof events in the TCG log, defeating remotely-attested measured-boot."}},{"Input":{"Module":"github.com/containerd/containerd","Description":"### Impact\n\nA bug was found in containerd where supplementary groups are not set up properly inside a container. If an attacker has direct access to a container and manipulates their supplementary group access, they may be able to use supplementary group access to bypass primary group restrictions in some cases, potentially gaining access to sensitive information or gaining the ability to execute code in that container.\n\nDownstream applications that use the containerd client library may be affected as well.\n\n### Patches\nThis bug has been fixed in containerd v1.6.18 and v1.5.18. Users should update to these versions and recreate containers to resolve this issue. Users who rely on a downstream application that uses containerd's client library should check that application for a separate advisory and instructions.\n\n### Workarounds\n\nEnsure that the `\"USER $USERNAME\"` Dockerfile instruction is not used. Instead, set the container entrypoint to a value similar to `ENTRYPOINT [\"su\", \"-\", \"user\"]` to allow `su` to properly set up supplementary groups.\n\n### References\n\n- https://www.benthamsgaze.org/2022/08/22/vulnerability-in-linux-containers-investigation-and-mitigation/\n- Docker/Moby: CVE-2022-36109, fixed in Docker 20.10.18\n- CRI-O: CVE-2022-2995, fixed in CRI-O 1.25.0\n- Podman: CVE-2022-2989, fixed in Podman 3.0.1 and 4.2.0\n- Buildah: CVE-2022-2990, fixed in Buildah 1.27.1\n\nNote that CVE IDs apply to a particular implementation, even if an issue is common.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Open an issue in [containerd](https://github.com/containerd/containerd/issues/new/choose)\n* Email us at [security@containerd.io](mailto:security@containerd.io)\n\nTo report a security issue in containerd:\n* [Report a new vulnerability](https://github.com/containerd/containerd/security/advisories/new)\n* Email us at [security@containerd.io](mailto:security@containerd.io)"},"Suggestion":{"Summary":"Privilege escalation via supplementary groups in github.com/containerd/containerd","Description":"Supplementary groups are not set up properly inside a container. If an attacker has direct access to a container and manipulates their supplementary group access, they may be able to use supplementary group access to bypass primary group restrictions in some cases and potentially escalate privileges in the container. 
Uses of the containerd client library may also have improperly set up supplementary groups."}},{"Input":{"Module":"github.com/ipfs/go-merkledag","Description":"### Impact\n\nA `ProtoNode` may be modified in such a way as to cause various encode errors which will trigger a panic on common method calls that don't allow for error returns.\n\nA `ProtoNode` should only be able to encode to valid DAG-PB; attempting to encode invalid DAG-PB forms will result in an error from the codec. Manipulation of an existing (newly created or decoded) `ProtoNode` using the modifier methods did not account for certain states that would place the `ProtoNode` into an unencodeable form.\n\nDue to conformance with the [`github.com/ipfs/go-block-format#Block`](https://pkg.go.dev/github.com/ipfs/go-block-format#Block) and [`github.com/ipfs/go-ipld-format#Node`](https://pkg.go.dev/github.com/ipfs/go-ipld-format#Node) interfaces, certain methods, which internally require a re-encode if state has changed, will panic due to the inability to return an error.\n\nAdditionally, use of the `ProtoNode#SetCidBuilder()` method to set a non-functioning `CidBuilder` (such as one that refers to a multihash where an implementation of that hash function is not available) may cause the same methods to panic as a new CID is required but cannot be created.\n\n### Patches\n\nReleases involving fixes for this issue are [v0.8.0](https://github.com/ipfs/go-merkledag/releases/tag/v0.8.0) and [v0.8.1](https://github.com/ipfs/go-merkledag/releases/tag/v0.8.1). The recommended minimum version is **v0.8.1**.\n\n* Additional checks are performed on `ProtoNode` state changes to avoid the possibility of creating unencodeable forms, errors are returned where this is the case.\n* The builder passed in to `SetCidBuilder()` is inspected to attempt to determine if it is usable to generate CIDs, otherwise an error is returned.\n* The panics have been removed and replaced with default values (empty byte slice for `RawData()` and a default zero-bytes DAG-PB CID for methods involving CIDs).\n\n### Workarounds\n\nThese workarounds are available when using impacted versions to avoid panic conditions, and may be generally appropriate in order to provide meaningful feedback to users and avoid generating bad, or unexpected encoded data:\n\n* Sanitise inputs when allowing user-input to set a new `CidBuilder` on a `ProtoNode`.\n* Sanitise `Tsize` (`Link#Size`) values such that they are a reasonable byte-size for sub-DAGs where derived from user-input.\n\n### References\n\n* https://github.com/ipfs/kubo/issues/9297\n* https://github.com/ipfs/go-merkledag/issues/90\n* https://github.com/ipfs/go-merkledag/releases/tag/v0.8.0\n* https://github.com/ipfs/go-merkledag/pull/91\n* https://github.com/ipfs/go-merkledag/pull/92\n* https://github.com/ipfs/go-merkledag/pull/93\n* https://github.com/ipfs/go-merkledag/releases/tag/v0.8.1\n\n\n### Credit\n\nThanks to [@mrd0ll4r](https://github.com/mrd0ll4r) for reporting the original error to Kubo!"},"Suggestion":{"Summary":"Panic in github.com/ipfs/go-merkledag","Description":"A ProtoNode may be modified in such a way as to cause various encode errors which will trigger a panic on common method calls that don't allow for error returns. 
Additionally, use of the ProtoNode.SetCidBuilder() method to set a non-functioning CidBuilder (such as one that refers to a multihash where an implementation of that hash function is not available) may cause the same methods to panic as a new CID is required but cannot be created."}},{"Input":{"Module":"github.com/prometheus/client_golang","Description":"This is the Go client library for Prometheus. It has two separate parts, one for instrumenting application code, and one for creating clients that talk to the Prometheus HTTP API. client_golang is the instrumentation library for Go applications in Prometheus, and the promhttp package in client_golang provides tooling around HTTP servers and clients.\n\n### Impact\n\nHTTP server susceptible to a Denial of Service through unbounded cardinality, and potential memory exhaustion, when handling requests with non-standard HTTP methods.\n\n### Affected Configuration\n\nIn order to be affected, an instrumented software must\n\n* Use any of `promhttp.InstrumentHandler*` middleware except `RequestsInFlight`.\n* Not filter any specific methods (e.g. GET) before middleware.\n* Pass metric with `method` label name to our middleware.\n* Not have any firewall/LB/proxy that filters away requests with unknown `method`.\n\n### Patches\n\n* https://github.com/prometheus/client_golang/pull/962\n* https://github.com/prometheus/client_golang/pull/987\n\n### Workarounds\n\nIf you cannot upgrade to [v1.11.1 or above](https://github.com/prometheus/client_golang/releases/tag/v1.11.1), in order to stop being affected you can:\n\n* Remove `method` label name from counter/gauge you use in the InstrumentHandler.\n* Turn off affected promhttp handlers.\n* Add custom middleware before promhttp handler that will sanitize the request method given by Go http.Request.\n* Use a reverse proxy or web application firewall, configured to only allow a limited set of methods.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Open an issue in https://github.com/prometheus/client_golang\n* Email us at `prometheus-team@googlegroups.com`\n"},"Suggestion":{"Summary":"Uncontrolled resource consumption in github.com/prometheus/client_golang","Description":"The Prometheus client_golang HTTP server is vulnerable to a denial of service attack when handling requests with non-standard HTTP methods. In order to be affected, an instrumented software must use any of the promhttp.InstrumentHandler* middleware except `RequestsInFlight`; not filter any specific methods (e.g. GET) before middleware; pass a metric with a \"method\" label name to a middleware; and not have any firewall/LB/proxy that filters away requests with unknown \"method\"."}},{"Input":{"Module":"github.com/containers/storage","Description":"A deadlock vulnerability was found in `github.com/containers/storage` in versions before 1.28.1. When a container image is processed, each layer is unpacked using `tar`. If one of those layers is not a valid `tar` archive this causes an error leading to an unexpected situation where the code indefinitely waits for the tar unpacked stream, which never finishes. 
An attacker could use this vulnerability to craft a malicious image, which when downloaded and stored by an application using containers/storage, would then cause a deadlock leading to a Denial of Service (DoS)."},"Suggestion":{"Summary":"Denial of service via deadlock in github.com/containers/storage","Description":"Due to a goroutine deadlock, using github.com/containers/storage/pkg/archive.DecompressStream on an xz archive returns a reader which will hang indefinitely when Close is called. An attacker can use this to cause denial of service if they are able to cause the caller to attempt to decompress an archive they control."}},{"Input":{"Module":"github.com/pion/webrtc/v3","Description":"### Impact\nData channel communication was incorrectly allowed with users who have failed DTLS certificate verification.\n\nThis attack requires \n* The attacker knows the ICE password. \n* It can only take place during the PeerConnection handshake.\n\nThis attack can be detected by monitoring `PeerConnectionState` in all versions of Pion WebRTC.\n\n### Patches\nUsers should upgrade to v3.0.15. \n\nThe exact patch is https://github.com/pion/webrtc/commit/545613dcdeb5dedb01cce94175f40bcbe045df2e\n\n### Workarounds\nUsers should listen for when `PeerConnectionState` changes to `PeerConnectionStateFailed`. When it enters this state users should not continue using the PeerConnection.\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue in https://github.com/pion/webrtc\n* Email us at [team@pion.ly](mailto:team@pion.ly)\n\nThank you to https://github.com/Gaukas for discovering this."},"Suggestion":{"Summary":"Authorization bypass in github.com/pion/webrtc/v3","Description":"Due to improper error handling, DTLS connections were not killed when certificate verification failed, causing users who did not check the connection state to continue to use the connection. This could allow an attacker who holds the ICE password, but not a valid certificate, to bypass this restriction."}},{"Input":{"Module":"github.com/hashicorp/vault","Description":"HashiCorp Vault's implementation of Shamir's secret sharing used precomputed table lookups, and was vulnerable to cache-timing attacks. 
An attacker with access to, and the ability to observe, a large number of unseal operations on the host through a side channel may reduce the search space of a brute force effort to recover the Shamir shares. Fixed in Vault 1.13.1, 1.12.5, and 1.11.9."},"Suggestion":{"Summary":"Cache-timing attacks in Shamir's secret sharing in github.com/hashicorp/vault","Description":"HashiCorp Vault's implementation of Shamir's secret sharing uses precomputed table lookups, and is vulnerable to cache-timing attacks. An attacker with access to, and the ability to observe, a large number of unseal operations on the host through a side channel may reduce the search space of a brute force effort to recover the Shamir shares."}},{"Input":{"Module":"github.com/rancher/wrangler","Description":"### Impact\n\nA command injection vulnerability was discovered in Wrangler's Git package affecting versions up to and including `v1.0.0`.\n\nWrangler's Git package uses the underlying Git binary present in the host OS or container image to execute Git operations. Specially crafted commands can be passed to Wrangler that will change their behavior and cause confusion when executed through Git, resulting in command injection in the underlying host.\n\n### Workarounds\n\nA workaround is to sanitize input passed to the Git package to remove potentially unsafe and ambiguous characters. Otherwise, the best course of action is to update to a patched Wrangler version.\n\n### Patches\n\nPatched versions include `v1.0.1` and later, and the backported tags `v0.7.4-security1`, `v0.8.5-security1`, and `v0.8.11`.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Reach out to the [SUSE Rancher Security team](https://github.com/rancher/rancher/security/policy) for security-related inquiries.\n* Open an issue in the [Rancher](https://github.com/rancher/rancher/issues/new/choose) or [Wrangler](https://github.com/rancher/wrangler/issues/new) repository.\n* Verify our [support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/) and [product support lifecycle](https://www.suse.com/lifecycle/)."},"Suggestion":{"Summary":"Command injection in github.com/rancher/wrangler","Description":"A command injection vulnerability exists in the Wrangler Git package. Specially crafted commands can be passed to Wrangler that will change their behavior and cause confusion when executed through Git, resulting in command injection in the underlying host. A workaround is to sanitize input passed to the Git package to remove potentially unsafe and ambiguous characters. Otherwise, the best course of action is to update to a patched Wrangler version."}},{"Input":{"Module":"github.com/open-policy-agent/opa","Description":"### Impact\n\nThe Rego compiler provides a (deprecated) `WithUnsafeBuiltins` function, which allows users to provide a set of built-in functions that should be deemed unsafe — and as such rejected — by the compiler if encountered in the policy compilation stage. A bypass of this protection has been found, where the use of the `with` keyword to mock such a built-in function (a feature introduced in OPA v0.40.0) isn’t taken into account by `WithUnsafeBuiltins`.\n\nThe same method is exposed via `rego.UnsafeBuiltins` in the `github.com/open-policy-agent/opa/rego` package.\n\nWhen provided e.g. 
the `http.send` built-in function to `WithUnsafeBuiltins`, the following policy would still compile and, when evaluated, would call the `http.send` function with the arguments provided to the `is_object` function:\n\n```rego\npackage policy\n\nfoo := is_object({\n \"method\": \"get\", \n \"url\": \"https://www.openpolicyagent.org\"\n})\n\nallow := r {\n r := foo with is_object as http.send\n}\n```\n\nBoth built-in functions and user-provided (i.e. custom) functions are mockable using this construct.\n\nIn addition to `http.send`, the `opa.runtime` built-in function is commonly considered unsafe in integrations where policy provided by untrusted parties is evaluated, as it risks exposing configuration or environment variables that potentially carry sensitive information.\n\n#### Affected Users\n\n**All of these conditions have to be met** to create an adverse effect:\n\n* Use the Go API for policy evaluation (not the OPA server, or the Go SDK)\n* Make use of the `WithUnsafeBuiltins` method in order to prevent certain built-in functions, such as `http.send`, from being used in policy evaluation.\n* Allow policy evaluation of policies provided by untrusted parties.\n* The policies evaluated include the `with` keyword to rewrite/mock a built-in or custom function into another built-in function, such as `http.send`.\n\n**Additionally, the OPA Query API** is affected:\n* If the OPA [Query API](https://www.openpolicyagent.org/docs/latest/rest-api/#query-api) is exposed to the public, and it is relied upon that `http.send` is unavailable in that context. Exposing the OPA API to the public without proper [authentication and authorization](https://www.openpolicyagent.org/docs/latest/security/#authentication-and-authorization) in place is generally advised against.\n\n### Patches\nv0.43.1, v0.44.0\n\n### Workarounds\n\nThe `WithUnsafeBuiltins` function has been considered deprecated since the introduction of the [capabilities](https://www.openpolicyagent.org/docs/latest/deployments/#capabilities) feature in OPA v0.23.0. While the function was commented as deprecated, the comment did not follow the [convention](https://zchee.github.io/golang-wiki/Deprecated/) for deprecated functions, and so might not have been picked up by tooling such as editors. This has now been fixed. 
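For reference, tooling keys on a comment paragraph that begins with \"Deprecated:\" (a minimal sketch of the convention, not the exact OPA source):\n\n```go\npackage ast\n\n// Compiler stands in for the real ast.Compiler here (illustration only).\ntype Compiler struct{ unsafe map[string]struct{} }\n\n// WithUnsafeBuiltins rejects the given built-in functions at compile time.\n//\n// Deprecated: Use WithCapabilities instead; capabilities also cover\n// built-ins mocked via the with keyword.\nfunc (c *Compiler) WithUnsafeBuiltins(u map[string]struct{}) *Compiler {\n\tc.unsafe = u\n\treturn c\n}\n```\n\n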
Users are still encouraged to use the capabilities feature over the deprecated `WithUnsafeBuiltins` function.\n\n**If you cannot upgrade**, consider using capabilities instead:\n\nCode like this using the `github.com/open-policy-agent/opa/ast` package:\n```go\n// VULNERABLE with OPA \u003c= 0.43.0\nunsafeBuiltins := map[string]struct{}{\n\tast.HTTPSend.Name: struct{}{},\n}\ncompiler := ast.NewCompiler().WithUnsafeBuiltins(unsafeBuiltins)\n```\n\nneeds to be changed to this:\n```go\ncaps := ast.CapabilitiesForThisVersion()\nj := -1 // index of http.send in the capabilities list, if present\nfor i, bi := range caps.Builtins {\n\tif bi.Name == ast.HTTPSend.Name {\n\t\tj = i\n\t\tbreak\n\t}\n}\nif j \u003e= 0 {\n\tcaps.Builtins[j] = caps.Builtins[len(caps.Builtins)-1] // put last element into position j\n\tcaps.Builtins = caps.Builtins[:len(caps.Builtins)-1] // truncate slice\n}\n\ncompiler := ast.NewCompiler().WithCapabilities(caps)\n```\n\nWhen using the `github.com/open-policy-agent/opa/rego` package:\n\n```go\n// VULNERABLE with OPA \u003c= 0.43.0\nr := rego.New(\n\t// other options omitted\n\trego.UnsafeBuiltins(map[string]struct{}{ast.HTTPSend.Name: struct{}{}}),\n)\n```\n\nneeds to be changed to\n```go\nr := rego.New(\n\t// other options omitted\n\trego.Capabilities(caps),\n)\n```\nwith `caps` defined above.\n\nNote that in the process, some error messages will change: `http.send` in this example will no longer be \"unsafe\" and thus forbidden, but it will simply become an \"unknown function\".\n\n### References\n\n* Fix commit on `main`: https://github.com/open-policy-agent/opa/commit/25a597bc3f4985162e7f65f9c36599f4f8f55823\n* Fix commit in 0.43.1 release: https://github.com/open-policy-agent/opa/commit/3e8c754ed007b22393cf65e48751ad9f6457fee8, release page for 0.43.1: https://github.com/open-policy-agent/opa/releases/tag/v0.43.1\n* Function mocking feature introduced in https://github.com/open-policy-agent/opa/pull/4540 and https://github.com/open-policy-agent/opa/pull/4616 \n* Documentation on the [capabilities](https://www.openpolicyagent.org/docs/latest/deployments/#capabilities) feature, which is the preferred way of providing a list of allowed built-in functions. The capabilities feature is **not** affected by this vulnerability.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Open an issue in [Community Discussions](https://github.com/open-policy-agent/community/discussions/categories/opa-and-rego)\n* Ask in Slack: https://slack.openpolicyagent.org/\n"},"Suggestion":{"Summary":"Protection bypass in github.com/open-policy-agent/opa","Description":"Open Policy Agent (OPA) is an open source, general-purpose policy engine. The Rego compiler provides a (deprecated) `WithUnsafeBuiltins` function, which allows users to provide a set of built-in functions that should be deemed unsafe and rejected by the compiler if encountered in the policy compilation stage. 
This protection can be bypassed, because mocking such a built-in function via the `with` keyword isn't taken into account by `WithUnsafeBuiltins`."}},{"Input":{"Module":"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp","Description":"### Impact\n\nThe [v0.38.0](https://github.com/open-telemetry/opentelemetry-go-contrib/releases/tag/v1.13.0) release of [`go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp`](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/463c2e7cd69d25f40b0a595b05394eeb26c68ae2/instrumentation/net/http/otelhttp/handler.go#L218) uses the [`httpconv.ServerRequest`](https://github.com/open-telemetry/opentelemetry-go/blob/v1.12.0/semconv/internal/v2/http.go#L159) function to annotate metric measurements for the `http.server.request_content_length`, `http.server.response_content_length`, and `http.server.duration` instruments.\n\nThe `ServerRequest` function sets the `http.target` attribute value to be the whole request URI (including the query string)[^1]. The metric instruments do not \"forget\" previous measurement attributes when `cumulative` temporality is used, which means the cardinality of the measurements allocated is directly correlated with the unique URIs handled. If the query string is constantly random, this will result in a constant increase in memory allocation that can be used in a denial-of-service attack.\n\nPseudo-attack:\n```\nfor infinite loop {\n r := generate_random_string()\n do_http_request(\"/some/path?random=\"+r)\n}\n```\n\n### Patches\n- `go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp` - v0.39.0\n- `go.opentelemetry.io/contrib/instrumentation/github.com/astaxie/beego/otelbeego` - v0.39.0\n\n[^1]: https://github.com/open-telemetry/opentelemetry-go/blob/6cb5718eaaed5c408c3bf4ad1aecee5c20ccdaa9/semconv/internal/v2/http.go#L202-L208"},"Suggestion":{"Summary":"Denial of service in go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp","Description":"The otelhttp package of opentelemetry-go-contrib is vulnerable to a denial-of-service attack. The otelhttp package uses the httpconv.ServerRequest function to annotate metric measurements for the http.server.request_content_length, http.server.response_content_length, and http.server.duration instruments. The ServerRequest function sets the http.target attribute value to be the whole request URI (including the query string). The metric instruments do not \"forget\" previous measurement attributes when \"cumulative\" temporality is used, meaning that the cardinality of the measurements allocated is directly correlated with the unique URIs handled. If the query string is constantly random, this will result in a constant increase in memory allocation that can be used in a denial-of-service attack."}},{"Input":{"Module":"github.com/cometbft/cometbft","Description":"### Impact\nAn internal modification to the way the `PeerState` struct is serialized to JSON introduced a deadlock when the new function MarshalJSON is called. This function can be called from two places:\n\n1. Via logs\n * Setting the `consensus` logging module to \"debug\" level (should not happen in production), and\n * Setting the log output format to JSON\n2. Via RPC `dump_consensus_state` \n\nCase 1 above, which should not be hit in production, will eventually hit the deadlock in most goroutines, effectively halting the node.\n\nIn case 2, only the data structures related to the first peer will be deadlocked, together with the thread(s) dealing with the RPC request(s). 
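\n\nThe shape of the bug, reduced to a self-contained sketch (illustrative Go; not CometBFT's actual code):\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"sync\"\n)\n\ntype PeerState struct {\n\tmtx sync.Mutex\n\tHeight int64\n}\n\nfunc (ps *PeerState) MarshalJSON() ([]byte, error) {\n\tps.mtx.Lock()\n\tdefer ps.mtx.Unlock()\n\t// BUG: marshalling ps itself re-invokes this method, which tries to\n\t// take mtx again; sync.Mutex is not reentrant, so this blocks forever.\n\treturn json.Marshal(ps)\n}\n\nfunc main() {\n\tb, err := json.Marshal(\u0026PeerState{Height: 7}) // deadlocks by construction\n\tfmt.Println(string(b), err)\n}\n```\n\n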
The effect is that only one of the channels of communication to the node's peers will be blocked. Eventually the peer will time out and be excluded from the list (typically after 2 minutes). The goroutines involved in the deadlock will not be garbage collected, but they will not interfere with the system after the peer is excluded.\n\nThe theoretical worst case for case 2 is a network with only two validator nodes. In this case, each of the nodes only has one `PeerState` struct. If `dump_consensus_state` is called in either node (or both), the chain will halt until the peer connections time out, after which the nodes will reconnect (with different `PeerState` structs) and the chain will progress again. Then, the same process can be repeated.\n\nAs the number of nodes in a network increases, and thus the number of peer structs each node maintains, the possibility of reproducing the perturbation visible with 2 nodes decreases. Only the first `PeerState` struct will deadlock, and not the others (RPC `dump_consensus_state` accesses them in a for loop, so the deadlock at the first iteration causes the rest of the iterations of that \"for\" loop to never be reached).\n\nThis regression was introduced in versions `v0.34.28` and `v0.37.1`, and will be fixed in `v0.34.29` and `v0.37.2`.\n\n### Patches\nThe PR containing the fix is [here](https://github.com/cometbft/cometbft/pull/865), and the corresponding issue is [here](https://github.com/cometbft/cometbft/pull/863).\n\n### Workarounds\nFor case 1 (hitting the deadlock via logs):\n* either don't set the log output to \"json\", leave it at \"plain\",\n* or don't set the consensus logging module to \"debug\", leave it at \"info\" or higher.\n\nFor case 2 (hitting the deadlock via RPC `dump_consensus_state`):\n* do not expose the `dump_consensus_state` RPC endpoint to the public internet (e.g., via rules in your nginx setup)\n\n### References\n\n* [Issue](https://github.com/cometbft/cometbft/pull/863) that introduced the deadlock\n* [Issue](https://github.com/cometbft/cometbft/pull/524) reporting the bug via logs\n"},"Suggestion":{"Summary":"Deadlock in github.com/cometbft/cometbft/consensus","Description":"An internal modification to the way PeerState is serialized to JSON introduced a deadlock when the new function MarshalJSON is called. This function can be called in two ways. The first is via logs, by setting the consensus logging module to \"debug\" level (which should not happen in production) and setting the log output format to JSON. The second is via the RPC dump_consensus_state."}},{"Input":{"Module":"helm.sh/helm/v3","Description":"Fuzz testing, by Ada Logics and sponsored by the CNCF, identified input to functions in the `_repo_` package that can cause a segmentation violation. Applications that use functions from the `_repo_` package in the Helm SDK can suffer a Denial of Service attack when they use this package and it panics.\n\n### Impact\n\nThe `_repo_` package contains a handler that processes the index file of a repository. For example, the Helm client adds references to chart repositories where charts are managed. The `_repo_` package parses the index file of the repository and loads it into structures Go can work with. 
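In SDK terms, the parse happens behind calls like the following (a sketch; the file path is hypothetical, and `repo.LoadIndexFile` is the exported entry point that loads an index):\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"helm.sh/helm/v3/pkg/repo\"\n)\n\nfunc main() {\n\t// Hypothetical path: an index.yaml fetched from an untrusted repository.\n\tidx, err := repo.LoadIndexFile(\"/tmp/index.yaml\")\n\tif err != nil {\n\t\tfmt.Println(\"rejected:\", err)\n\t\treturn\n\t}\n\tfmt.Println(\"charts indexed:\", len(idx.Entries))\n}\n```\n\n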
Some index files can cause array data structures to be created, causing a memory violation.\n\nApplications that use the `_repo_` package in the Helm SDK to parse an index file can suffer a Denial of Service when that input causes a panic that cannot be recovered from.\n\nThe Helm Client will panic with an index file that causes a memory violation panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client.\n\n### Patches\n\nThis issue has been resolved in 3.10.3. \n\n### Workarounds\n\nSDK users can validate that index files are correctly formatted before passing them to the `_repo_` functions.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."},"Suggestion":{"Summary":"Denial of service via repository index file in helm.sh/helm/v3","Description":"Applications that use the repo package in the Helm SDK to parse an index file can suffer a Denial of Service when that input causes a panic that cannot be recovered from. The repo package contains a handler that processes the index file of a repository. For example, the Helm client adds references to chart repositories where charts are managed. The repo package parses the index file of the repository and loads it into memory. Some index files can cause array data structures to be created, causing a memory violation. The Helm Client will panic with an index file that causes a memory violation panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client."}},{"Input":{"Module":"github.com/tendermint/tendermint","Description":"### Description \n\nTendermint Core v0.34.0 introduced a new way of handling evidence of misbehavior. As part of this, [we added a new `Timestamp` field to `Evidence` structs](https://github.com/tendermint/tendermint/pull/5219). This timestamp would be calculated using the same algorithm that is used when a block is created and proposed. (This algorithm relies on the timestamp of the last commit from this specific block.) \n\nIn Tendermint Core v0.34.0-v0.34.2, the `consensus` reactor is responsible for forming `DuplicateVoteEvidence` whenever double signs are observed. However, the current block is still “in flight” when it is being formed by the `consensus` reactor. It hasn’t been finalized through network consensus yet. This means that different nodes in the network may observe different “last commits” when assigning a timestamp to `DuplicateVoteEvidence`.\n\nIn turn, different nodes could form `DuplicateVoteEvidence` objects at the same height but with different timestamps. One `DuplicateVoteEvidence` object (with one timestamp) will then eventually get finalized in the block, but this means that any `DuplicateVoteEvidence` with a different timestamp is considered invalid. Any node that formed invalid `DuplicateVoteEvidence` will continue to propose invalid evidence; its peers may see this, and choose to disconnect from this node. This bug means that double signs are DoS vectors in Tendermint Core v0.34.0-v0.34.2. \n\nTendermint Core v0.34.3 is a security release which fixes this bug. As of v0.34.3, `DuplicateVoteEvidence` is no longer formed by the `consensus` reactor; rather, the `consensus` reactor passes the `Vote`s themselves into the `EvidencePool`, which is now responsible for forming `DuplicateVoteEvidence`. 
The `EvidencePool` has timestamp info that should be consistent across the network, which means that `DuplicateVoteEvidence` formed in this reactor should have consistent timestamps. \n\nThis release changes the API between the `consensus` and `evidence` reactors. \n\n### Impact\n\nThis is a denial-of-service vector which impacts networks running Tendermint Core v0.34.0 - v0.34.2.\n\n### Remediation\n\nThis problem has been patched in Tendermint Core v0.34.3. Networks running impacted versions of Tendermint Core should update immediately.\n\n### Workarounds\n\nThere are no workarounds, other than upgrading to a patched version of Tendermint Core.\n\n### Credits \n\n* Crypto.com (@cyril-crypto, @brianatcrypto, @tomtau and @yihuang) for finding and submitting this vulnerability\n* @melekes and @cmwaters for identifying the root cause and patching the problem \n\n### For more information\n\nIf you have any questions or comments about this advisory:\n* Open an issue in [tendermint/tendermint](https://github.com/tendermint/tendermint)\n* Email us at [security@tendermint.com](mailto:security@tendermint.com)"},"Suggestion":{"Summary":"Uncontrolled resource consumption during consensus in github.com/tendermint/tendermint","Description":"Mishandling of timestamps during the consensus process can cause a denial of service. While reaching consensus, different tendermint nodes can observe a different timestamp for a piece of consensus evidence. This mismatch can cause the evidence to be invalid, upon which the node producing the evidence will be asked to generate new evidence. This new evidence will be the same, which means it will again be rejected by other nodes involved in the consensus. This loop will continue until the peer nodes decide to disconnect from the node producing the evidence."}},{"Input":{"Module":"github.com/libp2p/go-libp2p","Description":"### Summary\nIn go-libp2p, by using signed peer records a malicious actor can store an arbitrary amount of data in a remote node’s memory. This memory does not get garbage collected and so the victim can run out of memory and crash.\n\nIt is feasible to do this at scale. An attacker would have to transfer ~1/2 as much data as the memory it wants to occupy (2x amplification factor).\n\nThe attacker can perform this attack over time as the target node’s memory will not be garbage collected.\n\nThis can occur because when a signed peer record is received, only the validity of the signature is checked; the identity of the sender is not. Signed peer records from randomly generated peers can be sent by a malicious actor. A target node will accept the peer record as long as the signature is valid, and it is then stored in the peer store.\n\nThere is cleanup logic in the peer store that cleans up data when a peer disconnects, but this cleanup is never triggered for the fake peer (from which signed peer records were accepted) because it was never “connected”.\n\n### Impact\nIf users of go-libp2p in production are not monitoring memory consumption over time, it could be a silent attack, i.e. the attacker could bring down nodes over a period of time (how long depends on the node resources, e.g. 
a go-libp2p node on a virtual server with 4 GB of memory takes about 90 seconds to bring down; on a larger server, it might take a bit longer.)\n\n### Patches\nUpdate your go-libp2p dependency to the latest release, v0.30.0 at the time of writing.\n\nIf you'd like to stay on the 0.27.x release, we strongly recommend updating to go-libp2p [0.27.7](https://github.com/libp2p/go-libp2p/releases/tag/v0.27.7). Though this OOM issue was fixed in 0.27.4, there were subsequent patch releases afterwards (important fixes for other issues unrelated to the OOM).\n\n### Workarounds\nNone"},"Suggestion":{"Summary":"libp2p nodes vulnerable to OOM attack","Description":"A malicious actor can store an arbitrary amount of data in the memory of a remote node by sending the node a message with a signed peer record. Signed peer records from randomly generated peers can be sent by a malicious actor. This memory does not get garbage collected and so the remote node can run out of memory (OOM)."}},{"Input":{"Module":"github.com/goreleaser/nfpm/v2","Description":"### Summary\nWhen building packages directly from source control, file permissions on the checked-in files are not maintained. \n\n### Details\nWhen building packages directly from source control, file permissions on the checked-in files are not maintained. When nfpm packages the files (without extra config enforcing its own permissions), files can go out with bad permissions (chmod 666 or 777).\n\n### PoC\nCreate a default nfpm structure. \n\nWithin the test folder, create 3 files named `chmod-XXX.sh`. Each script has file permissions set corresponding to its file name (`chmod-777.sh` = `chmod 777`). Each file and its permissions can be seen below.\n\n```console\n$ ls -lart test \ntotal 24\n-rwxrwxrwx 1 user group 11 May 19 13:15 chmod-777.sh\n-rw-rw-rw- 1 user group 11 May 19 13:16 chmod-666.sh\ndrwxr-xr-x 5 user group 160 May 19 13:19 .\n-rw-rw-r-- 1 user group 11 May 19 13:19 chmod-664.sh\ndrwxr-xr-x 10 user group 320 May 19 13:29 ..\n```\n\nBelow is a snippet of the nfpm configuration file describing the contents of the package. The test folder and its files have no extra config enforcing permissions. \n\n```yaml\ncontents:\n- src: foo-binary\n dst: /usr/bin/bar\n- src: bar-config.conf\n dst: /etc/foo-binary/bar-config.conf\n type: config\n- src: test\n dst: /etc/test/scripts\n```\n\nThe next step is to create a deb package.\n\n```console\n$ nfpm package -p deb # Create deb package\nusing deb packager...\ncreated package: foo_1.0.0_arm64.deb\n```\n\nOn an Ubuntu VM, install the foo package that was created.\n\n```console\n$ sudo dpkg -i foo_1.0.0_arm64.deb # Installing deb package within Ubuntu\nSelecting previously unselected package foo.\n(Reading database ... 67540 files and directories currently installed.)\nPreparing to unpack foo_1.0.0_arm64.deb ...\nUnpacking foo (1.0.0) ...\nSetting up foo (1.0.0) ...\n```\n\nLooking at `/etc/test/scripts` and viewing the permissions, we see they are carried over exactly from the source.\n\n```console\n$ ls -lart /etc/test/scripts\ntotal 20\n-rwxrwxrwx 1 root root 11 May 22 12:15 chmod-777.sh\n-rw-rw-rw- 1 root root 11 May 22 12:16 chmod-666.sh\n-rw-rw-r-- 1 root root 11 May 22 12:19 chmod-664.sh\ndrwxr-xr-x 3 root root 4096 May 22 13:00 ..\ndrwxr-xr-x 2 root root 4096 May 22 13:00 .\n```\n\n\n### Solution\nTo prevent world-writable files from making it into the packages, add the ability to override the default permissions of packaged files using a umask config option in the packaging spec file. 
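\n\nAs an interim measure, permissions can be pinned per entry in the nfpm config (a sketch; `file_info.mode` is nfpm's existing per-content permission override, to the best of our knowledge):\n\n```yaml\ncontents:\n- src: test\n  dst: /etc/test/scripts\n  file_info:\n    # Explicit mode overrides whatever mode the source files carry.\n    mode: 0755\n```\n\n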
Such a umask feature would allow applying a global umask across all files being packaged; with the correct configuration, this would prevent world-writable files without needing to list permissions on each and every file in the package.\n\n### Impact\n\nThe vulnerability is CWE-276: https://cwe.mitre.org/data/definitions/276.html\nCVSS: https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N\n\nAnyone using nfpm to create packages without checking/setting file permissions before packaging could end up with bad permissions on files/folders."},"Suggestion":{"Summary":"Incorrect permissions in github.com/goreleaser/nfpm/v2","Description":"When nfpm packages files without additional configuration to enforce its own permissions, the files could be packaged with incorrect permissions (chmod 666 or 777). Anyone who uses nfpm to create packages and does not check or set file permissions before packaging could end up with files or folders being packaged with incorrect permissions."}},{"Input":{"Module":"github.com/ipfs/go-bitfield","Description":"### Impact\nWhen feeding untrusted user input into the size parameter of the `NewBitfield` and `FromBytes` functions, an attacker can trigger `panic`s.\n\nThis happens when the `size` is not a multiple of `8` or is negative.\nThere was already a note in the `NewBitfield` documentation:\n\u003e ```\n\u003e Panics if size is not a multiple of 8.\n\u003e ```\n\nBut it was incomplete, and missing from the `FromBytes` documentation.\n\nThis has been replaced by returning a `(Bitfield, error)` pair, with a non-nil error if the size is wrong.\n\n### Patches\n- https://github.com/ipfs/go-bitfield/commit/5e1d256fe043fc4163343ccca83862c69c52e579\n\n### Workarounds\n- Ensure `size%8 == 0 \u0026\u0026 size \u003e= 0` yourself before calling `NewBitfield` or `FromBytes`\n\n### References\n- https://github.com/ipfs/go-unixfs/security/advisories/GHSA-q264-w97q-q778\n"},"Suggestion":{"Summary":"Denial of service via malformed size parameters in github.com/ipfs/go-bitfield","Description":"When feeding untrusted user input into the size parameter of the `NewBitfield` and `FromBytes` functions, an attacker can trigger panics. This happens when the size is not a multiple of 8 or is negative. A workaround is to ensure size%8 == 0 \u0026\u0026 size \u003e= 0 yourself before calling NewBitfield or FromBytes."}},{"Input":{"Module":"helm.sh/helm/v3","Description":"Fuzz testing, by Ada Logics and sponsored by the CNCF, identified input to functions in the `_chartutil_` package that can cause a segmentation violation. Applications that use functions from the `_chartutil_` package in the Helm SDK can suffer a Denial of Service attack when they use this package and it panics.\n\n### Impact\n\nThe `_chartutil_` package contains a parser that loads a JSON Schema validation file. For example, the Helm client, when rendering a chart, will validate its values with the schema file. The `_chartutil_` package parses the schema file and loads it into structures Go can work with. Some schema files can cause array data structures to be created, causing a memory violation.\n\nApplications that use the `_chartutil_` package in the Helm SDK to parse a schema file can suffer a Denial of Service when that input causes a panic that cannot be recovered from.\n\nThe Helm Client will panic with a schema file that causes a memory violation panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client.\n\n### Patches\n\nThis issue has been resolved in 3.10.3. 
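\n\nFor SDK users, the vulnerable parse sits behind calls like the following (a sketch; the schema bytes stand in for an untrusted values.schema.json, and `ValidateAgainstSingleSchema` is one `chartutil` entry point that parses a schema):\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"helm.sh/helm/v3/pkg/chartutil\"\n)\n\nfunc main() {\n\tschema, err := os.ReadFile(\"values.schema.json\") // untrusted input\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tvals := chartutil.Values{\"replicas\": 3}\n\tfmt.Println(chartutil.ValidateAgainstSingleSchema(vals, schema))\n}\n```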
\n\n### Workarounds\n\nSDK users can validate that schema files are correctly formatted before passing them to the `_chartutil_` functions.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."},"Suggestion":{"Summary":"Denial of service via schema file in helm.sh/helm/v3","Description":"Certain JSON schema validation files can cause a Helm Client to panic, leading to a possible denial of service. The chartutil package contains a parser that loads a JSON Schema validation file. For example, the Helm client, when rendering a chart, will validate its values with the schema file. The chartutil package parses the schema file and loads it into memory, but some schema files can cause array data structures to be created, causing a memory violation. The Helm Client will panic with a schema file that causes a memory violation panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client."}},{"Input":{"Module":"github.com/codenotary/immudb","Description":"### Impact\n\nIn certain scenarios a malicious immudb server can provide a falsified proof that will be accepted by the client SDK, signing a falsified transaction that replaces the genuine one. This situation cannot be triggered by a genuine immudb server and requires the client to perform a specific list of verified operations resulting in acceptance of an invalid state value.\n\nThis vulnerability only affects immudb client SDKs; the immudb server itself is not affected by this vulnerability.\n\n### Detailed description\n\nimmudb uses a Merkle Tree enhanced with an additional linear part to perform consistency proofs between two transactions. The linear part is built from the last leaf node of the Merkle Tree, compensating for transactions that were not yet consumed by the Merkle Tree calculation.\n\nThe Merkle Tree part is used to perform proofs for transactions in the range covered by the Merkle Tree, while the linear part is used to check those that are not yet in the Merkle Tree.\n\nWhen doing consistency checks between two immudb states, the linear proof part is not fully checked. In fact, only the first element (the last Merkle Tree leaf) and the last element (the current DB state value) are checked against the new Merkle Tree, without ensuring that elements in the middle of that chain are correctly added as Merkle Tree leaves.\n\nThis missing check means that the database can present a different set of hashes on the linear proof part than what would later be used once those hashes become part of the Merkle Tree. 
This property can be exploited by the database to expose two different transaction entries depending on the other transaction that the user requested a consistency proof for.\n\nIn practice this could lead to the following scenario:\n\n* a client requests a verified write operation\n* the server responds with a proof for the transaction\n* the client stores the state value retrieved from the server and expects it to be a confirmation of that write and all the history of the database before that transaction\n* a series of validated read / write operations is performed by the client, each accompanied by a successfully validated consistency proof and an update of the client state\n* the client requests a verified get operation on the transaction it has written before (and that was verified with a proof from the server)\n* the server replies with a completely different transaction that can be properly validated according to the currently stored db state on the client side\n\n### Patches\n\nThe following Go SDK version is not vulnerable:\n\n| **SDK** | **Version** |\n|-------|------------|\n| [go](pkg.go.dev/github.com/codenotary/immudb/pkg/client) | 1.4.1 |\n\n### Workarounds\n\nInvalid proofs cannot be generated by a normal immudb server and will be detected by a genuine replica server. To ensure that the server does not produce invalid proofs and to check that the history presented by the server does not contain falsified transactions, one should run a genuine immudb replica server in a safe environment and fully synchronize all databases with the primary.\n\n### References\n\n* https://github.com/codenotary/immudb/tree/master/docs/security/vulnerabilities/linear-fake\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Open a discussion in [immudb Discussions](https://github.com/codenotary/immudb/discussions/new)\n* Email us at [immudb-security@codenotary.com](mailto:immudb-security@codenotary.com)\n"},"Suggestion":{"Summary":"Insufficient verification of proofs in github.com/codenotary/immudb","Description":"In certain scenarios, a malicious immudb server can provide a falsified proof that will be accepted by the client SDK, signing a falsified transaction that replaces the genuine one. This situation cannot be triggered by a genuine immudb server and requires the client to perform a specific list of verified operations resulting in acceptance of an invalid state value. This vulnerability only affects immudb client SDKs; the immudb server itself is not affected by this vulnerability."}},{"Input":{"Module":"github.com/containernetworking/cni","Description":"An improper limitation of path name flaw was found in containernetworking/cni in versions before 0.8.1. When specifying the plugin to load in the 'type' field in the network configuration, it is possible to use special elements such as \"../\" separators to reference binaries elsewhere on the system. This flaw allows an attacker to execute existing binaries other than the CNI plugins/types, such as 'reboot'. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability."},"Suggestion":{"Summary":"Improper limitation of path name in github.com/containernetworking/cni","Description":"The FindInPath function is vulnerable to directory traversal attacks, potentially permitting attackers to execute arbitrary binaries. 
This function does not sanitize its plugin parameter, so parameter names containing \"../\" or other such elements may reference arbitrary locations on the filesystem."}},{"Input":{"Module":"github.com/pion/dtls/v2","Description":"### Impact\nA DTLS Client could provide a Certificate that it doesn't possess the private key for, and Pion DTLS wouldn't reject it. \n\nThis issue only affects users that are using client certificates. The connection itself is still secure. The Certificate provided by clients can't be trusted when using a Pion DTLS server prior to v2.1.5.\n\n### Patches\nUpgrade to Pion DTLS v2.1.5.\n\n### Workarounds\nNo workarounds are available; upgrade to Pion DTLS v2.1.5.\n\n### References\nThank you to [Juho Nurminen](https://github.com/jupenur) and the Mattermost team for discovering and reporting this. \n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue in [Pion DTLS](http://github.com/pion/dtls)\n* Email us at [team@pion.ly](mailto:team@pion.ly)"},"Suggestion":{"Summary":"Improper validation of client certificates in github.com/pion/dtls/v2","Description":"Client-provided certificates are not correctly validated, and must not be trusted. DTLS client certificates must be accompanied by proof that the client possesses the private key for the certificate. The Pion DTLS server accepted client certificates unaccompanied by this proof, permitting an attacker to present any certificate and have it accepted as valid."}},{"Input":{"Module":"mellium.im/sasl","Description":"An issue was discovered in Mellium mellium.im/sasl before 0.3.1. When performing SCRAM-based SASL authentication, if the remote end advertises support for channel binding, no random nonce is generated (instead, the nonce is empty). This causes authentication to fail in the best case, but (if paired with a remote end that does not validate the length of the nonce) could lead to insufficient randomness being used during authentication."},"Suggestion":{"Summary":"Authentication failure in mellium.im/sasl","Description":"An issue was discovered in Mellium mellium.im/sasl before 0.3.1. When performing SCRAM-based SASL authentication, if the remote end advertises support for channel binding, no random nonce is generated (instead, the nonce is empty). This causes authentication to fail in the best case, but (if paired with a remote end that does not validate the length of the nonce) could lead to insufficient randomness being used during authentication."}},{"Input":{"Module":"github.com/crossplane/crossplane-runtime","Description":"### Summary\n\nFuzz testing on `crossplane/crossplane`, by Ada Logics and sponsored by the CNCF, identified input to a function in the `fieldpath` package that can cause an out of memory panic. Applications that use the `Paved` type's `SetValue` method with user-provided input without proper validation might use excessive amounts of memory and cause an out of memory panic.\n\n### Details\n\nIn the `fieldpath` package, the `SetValue` method of the `Paved` type sets a value on the inner object according to the provided path, without validating it first. This allows setting values in slices at any specific index, and the code will grow the target array up to the required size. 
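A sketch of the hostile input shape (the path string is illustrative):\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/crossplane/crossplane-runtime/pkg/fieldpath\"\n)\n\nfunc main() {\n\tp := fieldpath.Pave(map[string]interface{}{})\n\t// One attacker-chosen index forces allocation of a slice with\n\t// billions of elements before any validation can intervene.\n\terr := p.SetValue(\"spec.items[4000000000].name\", \"x\")\n\tfmt.Println(err)\n}\n```\n\n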
The index is currently capped at max uint32 (4294967295) given how indexes are parsed, but that is still an unnecessarily large value.\n\n### Workaround\n\nUsers can parse and validate the path before passing it to the `SetValue` method of the `Paved` type, constraining the index size as deemed appropriate.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."},"Suggestion":{"Summary":"Out-of-memory panic in github.com/crossplane/crossplane-runtime","Description":"An out of memory panic vulnerability exists in the crossplane-runtime libraries. Applications that use the Paved type's SetValue method with user-provided input that is not properly validated might use excessive amounts of memory and cause an out of memory panic. In the fieldpath package, the Paved.SetValue method sets a value on the Paved object according to the provided path, without any validation. This allows setting values in slices at any provided index, which grows the target array up to the requested index. The index is currently capped at max uint32 (4294967295), a large value. If callers do not validate paths' indexes on their own, this could allow users to consume arbitrary amounts of memory. Applications that do not use the Paved type's SetValue method are not affected. Users unable to upgrade can work around this issue by parsing and validating the path before passing it to the SetValue method of the Paved type, constraining the index size as deemed appropriate."}},{"Input":{"Module":"helm.sh/helm/v3","Description":"Fuzz testing, by Ada Logics and sponsored by the CNCF, identified input to functions in the `_strvals_` package that can cause an out of memory panic. Out of memory panics cannot be recovered from. Applications that use functions from the `_strvals_` package in the Helm SDK can suffer a Denial of Service attack when they use this package and it panics.\n\n### Impact\n\nThe `_strvals_` package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like `--set`, `--set-string`, and others that enable the user to pass in strings that are merged into the values. The `_strvals_` package converts these strings into structures Go can work with. Some string inputs can cause array data structures to be created, causing an out of memory panic.\n\nApplications that use the `_strvals_` package in the Helm SDK to parse user-supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from.\n\nThe Helm Client will panic with input to `--set`, `--set-string`, and other value-setting flags that causes an out of memory panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client.\n\n### Patches\n\nThis issue has been resolved in 3.9.4. \n\n### Workarounds\n\nSDK users can validate that user-supplied strings won't create large arrays causing significant memory usage before passing them to the `_strvals_` functions.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."},"Suggestion":{"Summary":"Denial of service through string value parsing in helm.sh/helm/v3","Description":"Applications that use the strvals package in the Helm SDK to parse user-supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from. 
The strvals package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like --set, --set-string, and others that enable the user to pass in strings that are merged into the values. The strvals package converts these strings into structures Go can work with. Some string inputs can cause array data structures to be created, causing an out of memory panic. The Helm Client will panic with input to --set, --set-string, and other value-setting flags that causes an out of memory panic. Helm is not a long-running service, so the panic will not affect future uses of the Helm client."}},{"Input":{"Module":"github.com/codenotary/immudb","Description":"### Impact\n\nimmudb client SDKs use the server's UUID to distinguish between different server instances, so that the client can connect to different immudb instances and keep the state for multiple servers. The SDK does not validate this UUID and can accept any value reported by the server. A malicious server can change the reported UUID, tricking the client into treating it as a different server and thus accepting a state completely unrelated to the one previously retrieved from the server.\n\n### Patches\n\nThe following Go SDK version is not vulnerable:\n\n| **SDK** | **Version** |\n|-------|------------|\n| [go](pkg.go.dev/github.com/codenotary/immudb/pkg/client) | 1.4.1 |\n\n### Workarounds\n\nWhen initializing an immudb client object, a custom state handler can be used to store the state. Providing a custom implementation that ignores the server UUID ensures that even if the server changes the UUID, the client will still consider it to be the same server.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Open a discussion in [immudb Discussions](https://github.com/codenotary/immudb/discussions/new)\n* Email us at [immudb-security@codenotary.com](mailto:immudb-security@codenotary.com)\n"},"Suggestion":{"Summary":"Improper validation of UUIDs in github.com/codenotary/immudb","Description":"A malicious server can trick a client into treating it as a different server by changing the reported UUID. immudb client SDKs use the server's UUID to distinguish between different server instances, so that the client can connect to different immudb instances and keep the state for multiple servers. The SDK does not validate this UUID and accepts any value reported by the server. A malicious server can therefore change the reported UUID and trick the client into treating it as a different server."}}]