You are an expert computer security researcher. You are helping the Go programming language security team write high-quality, correct, and concise summaries and descriptions for the Go vulnerability database.

Given an affected module and a description of a vulnerability, output a JSON object containing 1) Summary: a short phrase identifying the core vulnerability, ending in the module name, and 2) Description: a plain text, one-to-two paragraph description of the vulnerability, omitting version numbers, written in the present tense, that highlights the impact of the vulnerability. The description should be concise, accurate, and easy to understand. It should also be written in a style that is consistent with the existing Go vulnerability database.

| input: {"Module":"github.com/docker/distribution","Description":"### Impact\n\nSystems that rely on digest equivalence for image attestations may be vulnerable to type confusion.\n\n### Patches\n\nUpgrade to at least `v2.8.0-beta.1` if you are running `v2.x` release. If you use the code from the `main` branch, update at least to the commit after [b59a6f827947f9e0e67df0cfb571046de4733586](https://github.com/distribution/distribution/commit/b59a6f827947f9e0e67df0cfb571046de4733586).\n\n### Workarounds\n\nThere is no way to work around this issue without patching.\n\n### References\n\nDue to [an oversight in the OCI Image Specification](https://github.com/opencontainers/image-spec/pull/411) that removed the embedded `mediaType` field from manifests, a maliciously crafted OCI Container Image can cause registry clients to parse the same image in two different ways without modifying the image’s digest by modifying the `Content-Type` header returned by a registry. This can invalidate a common pattern of relying on container image digests for equivalence.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n* Open an issue in [distribution](https://github.com/distribution/distribution) \n* Open an issue in [distribution-spec](https://github.com/opencontainers/distribution-spec) \n* Email us at [cncf-distribution-security@lists.cncf.io](mailto:cncf-distribution-security@lists.cncf.io)\n"} |
| output: {"Summary":"Type confusion in github.com/docker/distribution","Description":"Systems that rely on digest equivalence for image attestations may be vulnerable to type confusion. A maliciously crafted OCI Container Image can cause registry clients to parse the same image in two different ways without modifying the image's digest, invalidating the common pattern of relying on container image digests for equivalence. This problem has been addressed in newer versions by improving validation in manifest unmarshalling."} |
| input: {"Module":"github.com/pomerium/pomerium","Description":"### Impact\nChanges to the OIDC claims of a user after initial login are not reflected in policy evaluation when using [`allowed_idp_claims`](https://www.pomerium.com/reference/#allowed-idp-claims) as part of policy. If using `allowed_idp_claims` and a user's claims are changed, Pomerium can make incorrect authorization decisions.\n\n### Patches\nv0.15.6\n\n### Workarounds\n- Clear data on `databroker` service by clearing redis or restarting the in-memory databroker to force claims to be updated\n\n### References\nhttps://github.com/pomerium/pomerium/pull/2724\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue in [Pomerium](https://github.com/pomerium/pomerium)\n* Email us at [security@pomerium.com](mailto:security@pomerium.com)\n"} |
| output: {"Summary":"Incorrect authorization in github.com/pomerium/pomerium","Description":"Pomerium is an open source identity-aware access proxy. Changes to the OIDC claims of a user after initial login are not reflected in policy evaluation when using allowed_idp_claims as part of policy. If using allowed_idp_claims and a user's claims are changed, Pomerium can make incorrect authorization decisions. For users unable to upgrade clear data on databroker service by clearing redis or restarting the in-memory databroker to force claims to be updated."} |
| input: {"Module":"github.com/blevesearch/bleve","Description":"### Impact\n_What kind of vulnerability is it? Who is impacted?_\n\nBleve includes HTTP utilities under bleve/http package, that are used by its sample application. \n(https://github.com/blevesearch/bleve-explorer)\nThese HTTP methods paves way for exploitation of a node’s filesystem where the bleve index resides, \nif the user has used bleve’s own HTTP (bleve/http) handlers for exposing the access to the indexes. \nFor instance, the CreateIndexHandler (http/index_create.go) and DeleteIndexHandler (http/index_delete.go) \nenable an attacker to create a bleve index (directory structure) anywhere where the user running the server \nhas the write permissions and to delete recursively any directory owned by the same user account.\n \nUsers who have used the bleve/http package for exposing access to bleve index without the explicit \nhandling for the Role Based Access Controls(RBAC) of the index assets would be impacted.\n\n\n### Patches\n_Has the problem been patched? What versions should users upgrade to?_\n\n**No**. The http package is purely intended to be used for demonstration purposes. \nAnd bleve is never designed to be handling the RBACs or it was ever advertised to be used in that way. \nHence the collaborators of this project have decided to stay away from adding any authentication or \nauthorization to bleve project at the moment.\n\n### Workarounds\n_Is there a way for users to fix or remediate the vulnerability without upgrading?_\n\nThe bleve/http package is mainly for demonstration purposes and it lacks exhaustive validation of the user \ninputs as well as any authentication and authorization measures. \nSo it is recommended to not use that in production use cases.\n\n### For more information\nIf you have any questions or comments about this advisory:\n* Open an issue [here](https://github.com/blevesearch/bleve/issues).\n* Email us at [mailto:security@couchbase.com, fts-team@couchbase.com].\n"} |
| output: {"Summary":"No access control in github.com/blevesearch/bleve and bleve/v2","Description":"HTTP handlers provide unauthenticated access to the local filesystem. The Bleve http package is intended for demonstration purposes and contains no authentication, authorization, or validation of user inputs. Exposing handlers from this package can permit attackers to create files and delete directories."} |
| input: {"Module":"github.com/cloudflare/circl","Description":"### Impact\nWhen sampling randomness for a shared secret, the implementation of Kyber and FrodoKEM, did not check whether `crypto/rand.Read()` returns an error. In rare deployment cases (error thrown by the `Read()` function), this could lead to a predictable shared secret.\n\nThe tkn20 and blindrsa components did not check whether enough randomness was returned from the user provided randomness source. Typically the user provides `crypto/rand.Reader`, which in the vast majority of cases will always return the right number random bytes. In the cases where it does not, or the user provides a source that does not, the blinding for blindrsa is weak and integrity of the plaintext is not ensured in tkn20.\n\n\n### Patches\nThe fix was introduced in CIRCL v. 1.3.3\n"} |
| output: {"Summary":"Leaked shared secret and weak blinding in github.com/cloudflare/circl","Description":"When sampling randomness for a shared secret, the implementation of Kyber and FrodoKEM, did not check whether crypto/rand.Read() returns an error. In rare deployment cases (error thrown by the Read() function), this could lead to a predictable shared secret. The tkn20 and blindrsa components did not check whether enough randomness was returned from the user provided randomness source. Typically the user provides crypto/rand.Reader, which in the vast majority of cases will always return the right number random bytes. In the cases where it does not, or the user provides a source that does not, the blinding for blindrsa is weak and integrity of the plaintext is not ensured in tkn20."} |
| input: {"Module":"github.com/libp2p/go-libp2p","Description":"### Impact\nVersions older than `v0.18.0` of go-libp2p are vulnerable to targeted resource exhaustion attacks. These attacks target libp2p’s connection, stream, peer, and memory management. An attacker can cause the allocation of large amounts of memory, ultimately leading to the process getting killed by the host’s operating system. While a connection manager tasked with keeping the number of connections within manageable limits has been part of go-libp2p, this component was designed to handle the regular churn of peers, not a targeted resource exhaustion attack.\n\nIn the original version of the attack, the malicious node would continue opening new streams on a stream multiplexer that doesn’t provide sufficient back pressure (yamux or mplex). It is easy to defend against this one attack, but there are countless variations of this attack:\n* Opening streams and causing a non-trivial memory allocation (e.g., for multistream or protobuf parsing)\n* Creating a lot of sybil nodes and opening new connections across nodes\n\n### Patches (What to do as a go-libp2p consumer:)\n1. Update your go-libp2p dependency to go-libp2p v0.18.0 or greater (current version as of publish date is [v0.24.0](https://github.com/libp2p/go-libp2p/releases/tag/v0.24.0).)\n - Note: **It's recommend that you update to `v0.21.0` onwards** as you’ll get some useful functionality that will help in production environments like better metrics around resource usage, Grafana dashboards around resource usage, allow list support, and default autoscaling limits. [Please see the v0.21.0 release notes for more info.](https://github.com/libp2p/go-libp2p/releases/tag/v0.21.0))\n\n2. Determine appropriate limits for your application - go-libp2p sets up a resource manager with the default limits if none are provided. For default definitions please see [limits_defaults.go](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/limit_defaults.go). These limits are also set to automatically scale, this is done using the [AutoScale method of the ScalingLimitConfig](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/README.md#scaling-limits). We recommend you [tune your limits as described here](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/README.md#how-to-tune-your-limits).\n\n3. Configure your node to be attack resilient. See [how to respond to an attack and identify misbehaving peers here](https://docs.libp2p.io/concepts/security/dos-mitigation/#responding-to-an-attack). Then setup automatic blocking with fail2ban using canonical libp2p log lines: [guide on how to do so here](https://docs.libp2p.io/concepts/security/dos-mitigation/#how-to-automate-blocking-with-fail2ban).\n\n#### Examples\n* Lotus’ integration can be found in https://github.com/filecoin-project/lotus/blob/master/node/modules/lp2p/rcmgr.go. Lotus reads user-configured resource limits from a limits.json file into the root directory. This allows users to share their resource manager configuration independent of any other configurations.\n* Kubo’s (formerly go-ipfs) integration can be found in https://github.com/ipfs/go-ipfs/blob/master/core/node/libp2p/rcmgr.go. Kubo reads the limits from the IPFS config file.\n\n**Note:** go-libp2p still implements the [connection manager](https://github.com/libp2p/go-libp2p/tree/master/p2p/net/connmgr) mentioned above. 
The connection manager is a component independent of the resource manager, which aims to keep the number of libp2p connections between a low and a high watermark. When modifying connection limits, it’s advantageous to keep the configuration of these components consistent, i.e., when setting a limit of N concurrent connections in the resource manager, the high watermark should be at most (and ideally slightly less) than N.\n\n### Workarounds\nAlthough there are no workarounds within go-libp2p, some range of attacks can be mitigated using OS tools (like manually blocking malicious peers using `iptables` or `ufw` ) or making use of a load balancer in front of libp2p nodes.\n\nHowever these require direct action \u0026 responsibility on your part and are no substitutes for upgrading go-libp2p. Therefore, we highly recommend upgrading your go-libp2p version for the way it enables tighter scoped limits and provides visibility into and easier reasoning about go-libp2p resource utilization.\n\n### References\nPlease see our DoS Mitigation page for more information on how to incorporate mitigation strategies, monitor your application, and respond to attacks: https://docs.libp2p.io/reference/dos-mitigation/. \n\nPlease see the related disclosure for rust-libp2p: https://github.com/libp2p/rust-libp2p/security/advisories/GHSA-jvgw-gccv-q5p8 and js-libp2p: https://github.com/libp2p/js-libp2p/security/advisories/GHSA-f44q-634c-jvwv\n\n#### For more information\n\nIf you have any questions or comments about this advisory email us at [security@libp2p.io](mailto:security@libp2p.io)"} |
| output: {"Summary":"Resource exhaustion in github.com/libp2p/go-libp2p","Description":"go-libp2p is vulnerable to targeted resource exhaustion attacks. These attacks target libp2p's connection, stream, peer, and memory management. An attacker can cause the allocation of large amounts of memory ultimately leading to the process getting killed by the host's operating system. While a connection manager tasked with keeping the number of connections within manageable limits has been part of go-libp2p, this component was designed to handle the regular churn of peers, not a targeted resource exhaustion attack. It's recommend to update to v0.21.0 onwards to get some useful functionality that will help in production environments like better metrics around resource usage, Grafana dashboards around resource usage, allow list support, and default autoscaling limits."} |
| input: {"Module":"github.com/supranational/blst","Description":"### Impact\n\nBlst versions v0.3.0 through 0.3.10 failed to perform a signature group-check if the call to `SigValidate` in the Go bindings was complemented with a check for infinity. Formally speaking, infinity, or the identity element of the elliptic curve group, is a member of the group, and the group-check should allow it. An initial review of blst users on GitHub did not uncover any use of this function with the complementary infinity check. This optional check was added as a convenience feature because despite infinity being a legitimate member of the group, individual signatures should never be infinite (as it is equivalent to having zero for the secret key), and observing one should raise a flag.\n\n### Description\n\nThe vulnerable function is declared as `SigValidate(sigInfcheck bool) bool` and if called with `sigInfcheck` argument set to `true`, the group-check was omitted. The group-check is required to be performed on untrusted input, because the pairing, the corner-stone operation of the signature scheme, is defined only on points that are members of a specific cyclic group, which is a subset of all the possible points on elliptic curves in question. Submitting an untrusted point outside the group opens up the possibility of accepting an alternative signature for a chosen message.\n\nIt should be noted that the SigValidate call is not the only way to perform the group-checks. There are optional checks integrated into various other verification methods, such as `Verify`, `AggregateVerify`, etc., as well as signature aggregation methods, such as `PairingAggregate*`. The reason why there are multiple options is that the group-check is a relatively expensive operation, and application developers are arguably entitled the freedom to choose when it's performed.\n\n### Patches\n\nThis issue has been resolved in the v0.3.11 release and users are advised to update if their application is affected or alternatively omit the complementary infinity check.\n\n### Credits\n\nA special thanks to Yunjong Jeong (@blukat29) for the discovery and disclosure of this vulnerability.\n"} |
| output: {"Summary":"Blst fails to perform group signature validation","Description":"When complemented with a check for infinity, blst skips performing a signature group-check. Formally speaking, infinity is the identity element of the elliptic curve group and as such it is a member of the group, so the group-check should be performed. The fix performs the check even in the presence of infinity."} |
| input: {"Module":"github.com/argoproj/argo-cd/v2","Description":"### Impact\nAll versions of Argo CD starting with v2.6.0-rc1 have an output sanitization bug which leaks repository access credentials in error messages. These error messages are visible to the user, and they are logged. The error message is visible when a user attempts to create or update an Application via the Argo CD API (and therefor the UI or CLI). The user must have `applications, create` or `applications, update` RBAC access to reach the code which may produce the error.\n\nThe user is not guaranteed to be able to trigger the error message. They may attempt to spam the API with requests to trigger a rate limit error from the upstream repository. \n\nIf the user has `repositories, update` access, they may edit an existing repository to introduce a URL typo or otherwise force an error message. But if they have that level of access, they are probably intended to have access to the credentials anyway.\n\n### Patches\n\nA patch for this vulnerability has been released in the following Argo CD version:\n\n* v2.6.1\n\n### Workarounds\n\nThe only way to completely resolve the issue is to upgrade.\n\n#### Mitigations\n\nTo mitigate the issue, make sure that your repo credentials have only least necessary privileges. For example, the credentials should not have push access, and they should not have access to more resources than what Argo CD actually needs (for example, a whole GitHub org when only one repo is needed).\n\nTo further mitigate the impact of a leaked write-capable repo credential, you could [enable commit signature verification](https://argo-cd.readthedocs.io/en/stable/user-guide/gpg-verification/#enforcing-signature-verification). Even if someone could push a malicious commit, the commit would not by synced.\n\nYou should also enforce least privileges in Argo CD RBAC. Make sure users only have `repositories, update`, `applications, update`, or `applications, create` access if they absolutely need it.\n\n### References\n\n* The problem was initially reported in a [GitHub issue](https://github.com/argoproj/argo-cd/issues/12309)\n* [Argo CD RBAC configuration documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/)\n\n### For more information\n\n* Open an issue in [the Argo CD issue tracker](https://github.com/argoproj/argo-cd/issues) or [discussions](https://github.com/argoproj/argo-cd/discussions)\n* Join us on [Slack](https://argoproj.github.io/community/join-slack) in channel #argo-cd\n"} |
| output: {"Summary":"Repository access credential leak in github.com/argoproj/argo-cd/v2","Description":"Argo CD has an output sanitization bug which leaks repository access credentials in error messages. These error messages are visible to the user, and they are logged. The error message is visible when a user attempts to create or update an Application via the Argo CD API (and therefor the UI or CLI). The user must have `applications, create` or `applications, update` RBAC access to reach the code which may produce the error. The user is not guaranteed to be able to trigger the error message. They may attempt to spam the API with requests to trigger a rate limit error from the upstream repository. If the user has `repositories, update` access, they may edit an existing repository to introduce a URL typo or otherwise force an error message."} |
| input: {"Module":"github.com/ethereum/go-ethereum","Description":"### Impact\n\nA vulnerability in the Geth EVM could cause a node to reject the canonical chain. \n\n### Description \n\nA memory-corruption bug within the EVM can cause a consensus error, where vulnerable nodes obtain a different `stateRoot` when processing a maliciously crafted transaction. This, in turn, would lead to the chain being split in two forks.\n\nAll Geth versions supporting the London hard fork are vulnerable (which predates London), so all users should update.\n\nThis bug was exploited on Mainnet at block 13107518, leading to a minority chain split. \n\n### Patches\n\nA patch is included in the `v1.10.8` release.\nThe exact patch to fix the issue is contained within this [commit](https://github.com/ethereum/go-ethereum/pull/23381/commits/4d4879cafd1b3c906fc184a8c4a357137465128f)\n\n### Workarounds\n\nNo workarounds exist, save to update and/or apply the patch commit. \n\n### References. \n\nPost-mortem [write-up](https://github.com/ethereum/go-ethereum/blob/master/docs/postmortems/2021-08-22-split-postmortem.md).\n\n### Credits\n\nThe bug was found by @guidovranken (working for [Sentnl](https://sentnl.io/) during an audit of the [Telos EVM](https://www.telos.net/evm)) and reported via bounty@ethereum.org.\n\n### For more information\nIf you have any questions or comments about this advisory:\n\n* Open an issue in [go-ethereum](https://github.com/ethereum/go-ethereum/)\n* Email us at [security@ethereum.org](mailto:security@ethereum.org)\n"} |
| output: {"Summary":"Consensus flaw during block processing in github.com/ethereum/go-ethereum","Description":"A vulnerability in the Geth EVM can cause a node to reject the canonical chain. A memory-corruption bug within the EVM can cause a consensus error, where vulnerable nodes obtain a different stateRoot when processing a maliciously crafted transaction. This, in turn, would lead to the chain being split in two forks."} |
| input: {"Module":"github.com/cometbft/cometbft","Description":"### Impact\n\nThe mempool maintains two data structures to keep track of outstanding transactions: a list and a map.\nThese two data structures are supposed to be in sync all the time in the sense that the map tracks the index (if any) of the transaction in the list. \n\nUnfortunately, it is possible to have them out of sync. When this happens, the list may contain several copies of the same transaction.\nBecause the map tracks a single index, it is then no longer possible to remove all the copies of the transaction from the list.\nThis happens even if the duplicated transaction is later committed in a block.\nThe only way to remove the transaction is by restarting the node.\n\nThese are the steps to cause the above duplication problem. Everything should happen within one height, that is no `FinalizeBlock` or `BeginBlock` ABCI calls should happen while these steps are reproduced:\n\n1. send transaction tx1 to the target full node via RPC\n2. send N more different transactions to the target full node, where N should be higher than the node's configured value for `cache_size` in `config.toml`\n3. send transaction tx1 again to the target full node\n\nOne of the copies of tx1 is now _stuck_ in the mempool's data structures. Effectively causing a memory leak, and having that node gossiping that transaction to its peers forever.\n\nThe above problem can be repeated on and on until a sizable number of transactions are stuck in the mempool, in order to try to bring down the target node.\n\nThis problem is present in releases: `v0.37.0`, and `v0.37.1`, as well as in `v0.34.28`, and all previous releases of the CometBFT repo. It will be fixed in releases `v0.34.29` and `v0.37.2`.\n\n### Patches\n\nThe PR containing the fix is [here](https://github.com/cometbft/cometbft/pull/890).\n\n\n### Workarounds\n\n* Increasing the value of `cache_size` in `config.toml` makes it very difficult to effectively attack a full node.\n* Not exposing the transaction submission RPC's would mitigate the probability of a successful attack, as the attacker would then have to create a modified (byzantine) full node to be able to perform the attack via p2p.\n\n### References\n\n* [PR](https://github.com/tendermint/tendermint/pull/2778) that introduced the map to track transactions in the mempool.\n* [PR](https://github.com/cometbft/cometbft/pull/890) containing the fix.\n"} |
| output: {"Summary":"Denial of service via OOM in github.com/cometbft/cometbft","Description":"A bug in the CometBFT middleware causes the mempool's two data structures to fall out of sync. This can lead to duplicate transactions that cannot be removed, even after they are committed in a block. The only way to remove the transaction is to restart the node. This can be exploited by an attacker to bring down a node by repeatedly submitting duplicate transactions."} |
| input: {"Module":"github.com/containers/buildah","Description":"A bug was found in Buildah where containers were created with non-empty inheritable Linux process capabilities, creating an atypical Linux environment and enabling programs with inheritable file capabilities to elevate those capabilities to the permitted set during execve(2).\n\nThis bug did not affect the container security sandbox as the inheritable set never contained more capabilities than were included in the container's bounding set.\n"} |
| output: {"Summary":"Incorrect default permissions in github.com/containers/buildah","Description":"Containers are created with non-empty inheritable Linux process capabilities, permitting programs with inheritable file capabilities to elevate those capabilities to the permitted set during execve(2). This bug does not affect the container security sandbox, as the inheritable set never contains more capabilities than are included in the container's bounding set."} |
| input: {"Module":"github.com/fluxcd/helm-controller/api","Description":"Flux controllers within the affected versions range are vulnerable to a denial of service attack. Users that have permissions to change Flux’s objects, either through a Flux source or directly within a cluster, can provide invalid data to fields `.spec.interval` or `.spec.timeout` (and structured variations of these fields), causing the entire object type to stop being processed.\n\nThe issue has two root causes: a) the Kubernetes type `metav1.Duration` not being fully compatible with the Go type `time.Duration` as explained on [upstream report](https://github.com/kubernetes/apimachinery/issues/131); b) lack of validation within Flux to restrict allowed values.\n\n### Workarounds\n\nAdmission controllers can be employed to restrict the values that can be used for fields `.spec.interval` and `.spec.timeout`, however upgrading to the latest versions is still the recommended mitigation.\n\n### Credits\n\nThis issue was reported by Alexander Block (@codablock) through the Flux security mailing list (as [recommended](https://fluxcd.io/security/#report-a-vulnerability)).\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n- Open an issue in any of the affected repositories.\n- Contact us at the CNCF Flux channel.\n\n### References\n\n- https://github.com/kubernetes/apimachinery/issues/131\n\n"} |
| output: {"Summary":"Denial of service in flux controllers in github.com/fluxcd modules","Description":"Flux controllers are vulnerable to a denial of service attack. Users that have permissions to change Flux's objects, either through a Flux source or directly within a cluster, can provide invalid data to fields `.spec.interval` or `.spec.timeout` (and structured variations of these fields), causing the entire object type to stop being processed. The issue has two root causes: a) the Kubernetes type `metav1.Duration` is not fully compatible with the Go type `time.Duration` as explained in https://github.com/kubernetes/apimachinery/issues/131, and b) a lack of validation within Flux to restrict allowed values."} |
| input: {"Module":"helm.sh/helm/v3","Description":"Fuzz testing, by Ada Logics and sponsored by the CNCF, identified input to functions in the _strvals_ package that can cause a stack overflow. In Go, a stack overflow cannot be recovered from. Applications that use functions from the _strvals_ package in the Helm SDK can have a Denial of Service attack when they use this package and it panics.\n\n### Impact\n\nThe _strvals_ package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like `--set`, `--set-string`, and others that enable the user to pass in strings that are merged into the values. The _strvals_ package converts these strings into structures Go can work with. Some string inputs can cause array data structures to be created causing a stack overflow.\n\nApplications that use the _strvals_ package in the Helm SDK to parse user supplied input can suffer a Denial of Service when that input causes a panic that cannot be recovered from.\n\nThe Helm Client will panic with input to `--set`, `--set-string`, and other value setting flags that causes a stack overflow. Helm is not a long running service so the panic will not affect future uses of the Helm client.\n\n### Patches\n\nThis issue has been resolved in 3.10.3. \n\n### Workarounds\n\nSDK users can validate strings supplied by users won't create large arrays causing significant memory usage before passing them to the _strvals_ functions.\n\n### For more information\n\nHelm's security policy is spelled out in detail in our [SECURITY](https://github.com/helm/community/blob/master/SECURITY.md) document.\n\n### Credits\n\nDisclosed by Ada Logics in a fuzzing audit sponsored by CNCF."} |
| output: {"Summary":"Denial of service in string value parsing in helm.sh/helm/v3","Description":"Applications that use the strvals package in the Helm SDK to parse user supplied input can suffer a Denial of Service when that input causes an error that cannot be recovered from. The strvals package contains a parser that turns strings into Go structures. For example, the Helm client has command line flags like --set, --set-string, and others that enable the user to pass in strings that are merged into the values. The strvals package converts these strings into structures Go can work with. Some string inputs can cause can cause a stack overflow to be created causing a stack overflow error. Stack overflow errors cannot be recovered from. The Helm Client will panic with input to --set, --set-string, and other value setting flags that causes a stack overflow. Helm is not a long running service so the error will not affect future uses of the Helm client."} |
| input: {"Module":"github.com/runatlantis/atlantis","Description":"The package github.com/runatlantis/atlantis/server/controllers/events before 0.19.7 is vulnerable to Timing Attack in the webhook event validator code, which does not use a constant-time comparison function to validate the webhook secret. It can allow an attacker to recover this secret as an attacker and then forge webhook events."} |
| output: {"Summary":"Timing attack in github.com/runatlantis/atlantis","Description":"Validation of Gitlab requests can leak secrets. The package github.com/runatlantis/atlantis/server/controllers/events uses a non-constant time comparison for secrets while validating a Gitlab request. This allows for a timing attack where an attacker can recover a secret and then forge the request."} |
| input: {"Module":"github.com/rancher/wrangler","Description":"### Impact\n\nA denial of services (DoS) vulnerability was discovered in Wrangler Git package affecting versions up to and including `v1.0.0`.\n\nSpecially crafted Git credentials can result in a denial of service (DoS) attack on an application that uses Wrangler due to the exhaustion of the available memory and CPU resources. This is caused by a lack of input validation of Git credentials before they are used, which may lead to a denial of service in some cases. This issue can be triggered when accessing both private and public Git repositories. \n\n### Workarounds\n\nA workaround is to sanitize input passed to the Git package to remove potential unsafe and ambiguous characters. Otherwise, the best course of action is to update to a patched Wrangler version.\n\n### Patches\n\nPatched versions include `v1.0.1` and later and the backported tags - `v0.7.4-security1`, `v0.8.5-security1` and `v0.8.11`.\n\n### For more information\n\nIf you have any questions or comments about this advisory:\n\n* Reach out to [SUSE Rancher Security team](https://github.com/rancher/rancher/security/policy) for security related inquiries.\n* Open an issue in [Rancher](https://github.com/rancher/rancher/issues/new/choose) or [Wrangler](https://github.com/rancher/wrangler/issues/new) repository.\n* Verify our [support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/) and [product support lifecycle](https://www.suse.com/lifecycle/)."} |
| output: {"Summary":"Denial of service when processing Git credentials in github.com/rancher/wrangler","Description":"A denial of service (DoS) vulnerability exists in the Wrangler Git package. Specially crafted Git credentials can result in a denial of service (DoS) attack on an application that uses Wrangler due to the exhaustion of the available memory and CPU resources. This is caused by a lack of input validation of Git credentials before they are used, which may lead to a denial of service in some cases. This issue can be triggered when accessing both private and public Git repositories. A workaround is to sanitize input passed to the Git package to remove potential unsafe and ambiguous characters. Otherwise, the best course of action is to update to a patched Wrangler version."} |
| input: {"Module":"github.com/containerd/imgcrypt","Description":"Imgcrypt implements a function `CheckAuthorization()` that is supposed to check whether a user is authorized to access an encrypted image given the keys that the user has provided on the command line that would enable decryption of the image. The check is to prevent that a user can start a container from an image that has previously been decrypted by another user on the same system and therefore a decrypted version of the image layers may be already available in the cache locally.\n\nThe failure occurs when an image with a ManifestList is used and the architecture of the local host is not the first one in the ManifestList. In the version prior to the fix, only the first architecture in the list was tested, which may not have its layers available locally (were not pulled) since it cannot be run on the host architecture. Therefore, the verdict on unavailable layers was that the image could be run anticipating that image run failure would occur later due to the layers not being available. However, this verdict to allow the image to run lead to other architectures in the ManifestList be able to run an image without providing keys if that image had previously been decrypted. The fixed version now skips over irrelevant architectures and tests the Manifest of the local architecture, if available.\n\nKnown projects that use the `CheckAuthorization()` of imgcrypt is for example the ctr-enc client tool provided by imgcrypt. In this implementation, the call to `CheckAuthorization()` is used on the client side and could therefore also be easily circumvented by a modified client tool not calling this function.\n\nIn relation to the vulnerability in ctr-enc, affected environments would have to allow different users to invoke ctr-enc indirectly using some sort of management stack that gives user indirect access to ctr-enc.\n\nThe patch has been applied to imgcrypt v1.1.4. Workarounds may include usage of different namespaces for each remote user."} |
| output: {"Summary":"Incorrect authorization in github.com/containerd/imgcrypt","Description":"The imgcrypt library provides API extensions for containerd to support encrypted container images and implements the ctd-decoder command line tool for use by containerd to decrypt encrypted container images. The imgcrypt function `CheckAuthorization` is supposed to check whether the current used is authorized to access an encrypted image and prevent the user from running an image that another user previously decrypted on the same system. In versions prior to 1.1.4, a failure occurs when an image with a ManifestList is used and the architecture of the local host is not the first one in the ManifestList. Only the first architecture in the list was tested, which may not have its layers available locally since it could not be run on the host architecture. Therefore, the verdict on unavailable layers was that the image could be run anticipating that image run failure would occur later due to the layers not being available. However, this verdict to allow the image to run enabled other architectures in the ManifestList to run an image without providing keys if that image had previously been decrypted. A patch has been applied to imgcrypt 1.1.4. Workarounds may include usage of different namespaces for each remote user."} |
| input: {"Module":"INPUT MODULE","Description":"INPUT DESCRIPTION"} |
| output: |