[GITEA] Fix cancelled migration deletion modal
- https://codeberg.org/forgejo/forgejo/pulls/1473 made it so that dangerous
actions such as deletion also require typing the owner's name.
This was apparently not reflected in the deletion modal for migrations
that failed or were cancelled.
(cherry picked from commit c38dbd6f88)
(cherry picked from commit 7c07592d01)
(cherry picked from commit 78637af2b6)
[SHARED] make confirmation clearer for dangerous actions
- Currently the confirmation for dangerous actions such as transferring
or deleting a repository only requires the user to ~~copy paste~~
type the repository name.
- This can be problematic when the user has a fork or another repository
with the same name as an organization's repository, and the confirmation
doesn't make it clear that the wrong repository could be deleted. While
the full name is mentioned in the dialog, it's better to be on the safe
side and also require the owner's name to be typed for these dangerous
actions (see the sketch after this list).
- Added integration tests.
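A minimal sketch of the idea with a hypothetical helper (not the actual Forgejo handler code): the confirmation text must spell out `owner/reponame`, so a fork with the same name can no longer be mistaken for the organization's repository.
```go
package main

import "fmt"

// confirmDangerousAction reports whether the text typed into the confirmation
// dialog identifies the repository unambiguously, i.e. includes the owner.
func confirmDangerousAction(typed, owner, repo string) bool {
	return typed == fmt.Sprintf("%s/%s", owner, repo)
}

func main() {
	// Typing only the repository name is no longer enough:
	fmt.Println(confirmDangerousAction("forgejo", "org", "forgejo"))     // false
	fmt.Println(confirmDangerousAction("org/forgejo", "org", "forgejo")) // true
}
```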
(cherry picked from commit bf679b24dd)
(cherry picked from commit 1963085dd9)
(cherry picked from commit fb94095d19)
(cherry picked from commit e1d1e46afe)
(cherry picked from commit 93993029e4)
(cherry picked from commit df3b058179)
(cherry picked from commit 8ccc6b9cba)
(cherry picked from commit 9fbe28fca3)
(cherry picked from commit 4ef2be6dc7)
https://codeberg.org/forgejo/forgejo/pulls/1873
Moved test from repo_test.go to forgejo_confirmation_repo_test.go to
avoid conflicts.
(cherry picked from commit 83cae67aa3)
- This is a 'front-port' of the patch that already exists on v1.21 and
v1.20, applied on top of Gitea's rework of the LTA mechanism. Forgejo
will stick with the mechanism reworked by the Forgejo Security team for
the time being. The removal of legacy code (AES-GCM) has been left out.
- The current architecture is inherently insecure, because the 'secret'
cookie value can be constructed from values that are available in the
database. It thus provides zero protection when a database is
dumped/leaked.
- This patch implements a new architecture inspired by the [Paragonie Initiative](https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies); a sketch of the scheme follows below.
- Integration testing is added to ensure the new mechanism works.
- Removes a setting because it's no longer used.
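A minimal sketch of the selector/validator scheme described in the linked article, with illustrative names (this is not the Forgejo implementation): only a hash of the validator is stored, so a database dump alone is not enough to rebuild the cookie.
```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// issueToken creates a random selector (stored as-is for lookup) and a random
// validator (only its hash is stored); the cookie carries both.
func issueToken() (cookie, selector string, validatorHash [32]byte, err error) {
	sel := make([]byte, 12)
	val := make([]byte, 32)
	if _, err = rand.Read(sel); err != nil {
		return
	}
	if _, err = rand.Read(val); err != nil {
		return
	}
	selector = hex.EncodeToString(sel)
	cookie = selector + ":" + hex.EncodeToString(val)
	validatorHash = sha256.Sum256(val)
	return
}

// verify compares the presented validator against the stored hash in constant time.
func verify(presentedHex string, storedHash [32]byte) bool {
	raw, err := hex.DecodeString(presentedHex)
	if err != nil {
		return false
	}
	h := sha256.Sum256(raw)
	return subtle.ConstantTimeCompare(h[:], storedHash[:]) == 1
}

func main() {
	cookie, selector, hash, err := issueToken()
	if err != nil {
		panic(err)
	}
	// The server looks the row up by selector, then checks the validator part.
	fmt.Println("lookup key:", selector, "valid:", verify(cookie[len(selector)+1:], hash))
}
```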
(cherry picked from commit e3d6622a63)
(cherry picked from commit fef1a6dac5)
(cherry picked from commit b0c5165145)
(cherry picked from commit 20b5669269)
(cherry picked from commit 1574643a6a)
Update semantic version according to specification
(cherry picked from commit 22510f4130)
Update 'Makefile'
(cherry picked from commit c3d85d8409)
(cherry picked from commit 5ea2309851)
(cherry picked from commit ec5217b9d1)
(cherry picked from commit 14f08e364b)
(cherry picked from commit b4465c67b8)
[API] [SEMVER] replace number with version
(cherry picked from commit fba48e6497)
(cherry picked from commit 532ec5d878)
[API] [SEMVER] [v1.20] Less is replaced by CSS
(cherry picked from commit 01ca3a4f42)
(cherry picked from commit 1d928c3ab2)
(cherry picked from commit a39dc804cd)
Conflicts:
webpack.config.js
(cherry picked from commit adc68578b3)
(cherry picked from commit 9b8d98475f)
(cherry picked from commit 2516103974)
(cherry picked from commit 18e6287963)
(cherry picked from commit e9694e67ab)
(cherry picked from commit a9763edaf0)
(cherry picked from commit e2b550f4fb)
(cherry picked from commit 2edac36701)
[API] Forgejo API /api/forgejo/v1 (squash)
Update semver as v1.20 is entering release candidate mode
(cherry picked from commit 4995098ec3)
(cherry picked from commit 578ccfdd27)
(cherry picked from commit 1bf6ac0952)
(cherry picked from commit 2fe16b2bfe)
(cherry picked from commit 7cd9d027ee)
(cherry picked from commit eaed4be2ae)
(cherry picked from commit cc94f3115f)
(cherry picked from commit d7a77e35cc)
(cherry picked from commit cd8eb68ab7)
(cherry picked from commit 68487ac95f)
(cherry picked from commit 616dceb565)
(cherry picked from commit 545fe5975b)
(cherry picked from commit c042cf8eda)
(cherry picked from commit ae5e5a7468)
(cherry picked from commit 8034ef5fa2)
(cherry picked from commit aaf0293034)
(cherry picked from commit daafa8ce58)
(cherry picked from commit 7ca3681d3e)
(cherry picked from commit 39f72cba71)
(cherry picked from commit 60a5917130)
(cherry picked from commit 4853bd9e16)
[API] Move forgejo api file (squash)
- Move the file to accommodate faa28b5a44
(cherry picked from commit bce89351d2)
(cherry picked from commit 11ae7f6e85)
(cherry picked from commit 25e96cfcb2)
(cherry picked from commit 6d8d19b391)
(cherry picked from commit 5afc5c454b)
(cherry picked from commit 86d07b4c24)
(cherry picked from commit e54d869fda)
Fix #28056
This PR checks whether the repo has zero branches when a branch is
pushed. If so, it means this repository hasn't been synced yet.
This can happen after a user upgrades from v1.20 -> v1.21 and then
pushes branches without visiting the repository user interface: every
repository router checks whether a branch sync is necessary, but push
had no such check.
Every repository has two states: synced or not synced. If a repository
has zero branches, it is assumed to be in the non-synced state;
otherwise it is considered synced. If it is synced, we only need to
update a branch or insert a new branch; otherwise we do a full sync.
This way, almost no extra load is added per push, and it performs
better than the alternative approach.
For the implementation, we first try to update the branch. If the
update succeeds with more than zero affected records, we are done,
because that means the branch is already in the database. If no record
is affected, the branch does not exist in the database, and there are
two possibilities: either this is a new branch, in which case we just
insert the record, or the branches haven't been synced yet, in which
case we sync all branches into the database (see the sketch below).
It will fix #28268.
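A rough sketch of that decision flow in Go; the in-memory branch store and the function names are made up for illustration and are not the actual Gitea code.
```go
package main

import "fmt"

// branches is an in-memory stand-in for the branch table: repoID -> name -> commit.
var branches = map[int64]map[string]string{}

// updateBranch mimics "UPDATE ... WHERE repo_id=? AND name=?" and returns the
// number of affected rows.
func updateBranch(repoID int64, name, commit string) int {
	if repo, ok := branches[repoID]; ok {
		if _, ok := repo[name]; ok {
			repo[name] = commit
			return 1
		}
	}
	return 0
}

func insertBranch(repoID int64, name, commit string) {
	if branches[repoID] == nil {
		branches[repoID] = map[string]string{}
	}
	branches[repoID][name] = commit
}

func syncAllBranches(repoID int64) {
	// The real code would enumerate the branches of the git repository here.
	if branches[repoID] == nil {
		branches[repoID] = map[string]string{}
	}
}

// syncBranchOnPush tries the cheap update first; only when no row is affected
// does it decide between "new branch" (insert) and "never synced" (full sync).
func syncBranchOnPush(repoID int64, name, commit string) {
	if updateBranch(repoID, name, commit) > 0 {
		return // the branch row already existed, nothing else to do
	}
	if len(branches[repoID]) == 0 {
		syncAllBranches(repoID) // zero branches recorded: the repo was never synced
	}
	insertBranch(repoID, name, commit)
}

func main() {
	syncBranchOnPush(1, "main", "abc123") // first push after upgrade: full sync path
	syncBranchOnPush(1, "dev", "def456")  // synced repo, new branch: insert path
	fmt.Println(branches)
}
```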
<img width="1313" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/cb1e07d5-7a12-4691-a054-8278ba255bfc">
<img width="1318" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/4fd60820-97f1-4c2c-a233-d3671a5039e9">
## ⚠️ BREAKING ⚠️
Some features need to be given up:
<img width="1312" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/281c0d51-0e7d-473f-bbed-216e2f645610">
However, abandoning them may fix #28055.
## Background
When the user switches the dashboard context to an org, it means they
want to search issues in the repos that belong to the org. However, when
they switch to themselves, it means all repos they can access because
they may have created an issue in a public repo that they don't own.
<img width="286" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/182dcd5b-1c20-4725-93af-96e8dfae5b97">
It's a confusing design. Think about this: what does "In your
repositories" mean when the user switches to an org? Does it mean repos
that belong to the user or to the org?
In any case, it has been broken by #26012 and its follow-up PRs. After
that PR, issues are searched only in repos that the dashboard context
user owns or has been explicitly granted access to, which causes #28268.
## How to fix it
It's not really difficult to fix. Just extend the repo scope when
searching issues and the dashboard context user is the doer. Since the
user may create issues or be mentioned in any public repo, we can just
set `AllPublic` to true, which is already supported by the indexers. The
DB condition will also support it in this PR (see the sketch below).
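A hedged sketch of that scope decision; the option struct and field names here are illustrative, not the real indexer API.
```go
package main

import "fmt"

// issueSearchOptions mirrors the shape of an indexer search request for this
// illustration only.
type issueSearchOptions struct {
	RepoIDs   []int64 // repos the doer owns or was explicitly granted access to
	AllPublic bool    // additionally include every public repo
}

// buildOptions widens the scope to all public repos when the dashboard context
// user is the doer, since they may have created or been mentioned in issues anywhere.
func buildOptions(doerID, ctxUserID int64, accessible []int64) issueSearchOptions {
	opts := issueSearchOptions{RepoIDs: accessible}
	if doerID == ctxUserID {
		opts.AllPublic = true
	}
	return opts
}

func main() {
	fmt.Printf("%+v\n", buildOptions(7, 7, []int64{1, 2}))  // doer context: AllPublic=true
	fmt.Printf("%+v\n", buildOptions(7, 42, []int64{1, 2})) // org context: only the org's repos
}
```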
But the real difficulty is how to count the search results grouped by
repos. It's something like "search issues with this keyword and those
filters, and return the total number and the top results. **Then, group
all of them by repo and return the counts of each group.**"
<img width="314" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/5206eb20-f8f5-49b9-b45a-1be2fcf679f4">
Before #26012, it was done in the DB, but that caused the results to
be incomplete (see the description of #26012).
And to keep this behavior, #26012 implemented it in an inefficient way,
counting the issues repo by repo, so it cannot work when `AllPublic` is
true because it's almost impossible to do this for all public repos.
1bfcdeef4c/modules/indexer/issues/indexer.go (L318-L338)
## Give up unnecessary features
We may be able to resolve `TODO: use "group by" of the indexer engines
to implement it`; I'm sure it can be done with Elasticsearch, but IIRC,
Bleve and Meilisearch don't support "group by".
And the real question is, is it worth it? Why do we need to know the
counts grouped by repo?
Let me show you my search dashboard on gitea.com.
<img width="1304" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/2bca2d46-6c71-4de1-94cb-0c9af27c62ff">
I don't think the long repo list helps anything.
And if we agree to abandon it, things become much easier. That is what
this PR does.
## TODO
I know it's important to filter by repos when searching issues. However,
it shouldn't be the way we have it now. It could be implemented like
this.
<img width="1316" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/99ee5f21-cbb5-4dfe-914d-cb796cb79fbe">
The indexers support it well now, but it requires some frontend work,
which I'm not good at. So, I think someone could help do that in another
PR and merge this one to fix the bug first.
Or please block this PR and help to complete it.
Finally, "Switch dashboard context" is also a design that needs
improvement. In my opinion, it can be accomplished by adding filtering
conditions instead of "switching".
This fixes a regression from #25859.
If a tag has no release, Gitea shows a link to create a release for
the tag if the user has the permission to do so, but the variable
indicating that was no longer set.
Used here:
1bfcdeef4c/templates/repo/tag/list.tmpl (L39-L41)
Fix #25473
Although there was `m.Post("/login/oauth/access_token", CorsHandler()...`,
it never really worked, because it still lacks the "OPTIONS" handler.
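As a rough illustration of why the extra handler is needed (using plain go-chi and its cors middleware rather than Gitea's router wrapper): the browser's preflight request is an OPTIONS request, so the route-scoped CORS middleware never runs unless an OPTIONS route is registered for the same path.
```go
package main

import (
	"log"
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/cors"
)

func main() {
	m := chi.NewRouter()
	corsHandler := cors.Handler(cors.Options{AllowedOrigins: []string{"*"}})

	tokenHandler := func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}

	// POST alone is not enough: the preflight uses OPTIONS and would get a 405
	// before the CORS middleware on this route is ever reached.
	m.With(corsHandler).Post("/login/oauth/access_token", tokenHandler)

	// Registering OPTIONS on the same path lets the middleware answer the preflight.
	m.With(corsHandler).Options("/login/oauth/access_token", func(w http.ResponseWriter, r *http.Request) {})

	log.Fatal(http.ListenAndServe(":8080", m))
}
```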
The steps to reproduce it:
1. Create a new OAuth2 source.
2. A user logs in with this OAuth2 source.
3. Disable the OAuth2 source.
4. Visit users -> settings -> security; a 500 error will be displayed.
This is because this page only loads active OAuth2 sources, not all
OAuth2 sources.
Fix nil access for inactive auth sources.
> Render failed, failed to render template:
user/settings/security/security, error: template error:
builtin(static):user/settings/security/accountlinks:32:20 : executing
"user/settings/security/accountlinks" at <$providerData.IconHTML>: nil
pointer evaluating oauth2.Provider.IconHTML
The code tries to access the auth source of an `ExternalLoginUser`, but
the list contains only the active auth sources.
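A small sketch of the guard, with made-up names rather than the actual Gitea types: resolve each linked account against all auth sources and skip entries that cannot be resolved, instead of handing the template a nil provider.
```go
package main

import "fmt"

type provider struct{ Name string }

// providersFor maps each external account's source ID to a provider, falling
// back to the full source list when the source has been disabled, and skipping
// entries that cannot be resolved at all.
func providersFor(sourceIDs []int64, active, all map[int64]*provider) map[int64]*provider {
	out := map[int64]*provider{}
	for _, id := range sourceIDs {
		p, ok := active[id]
		if !ok {
			p = all[id] // the source was disabled after the account was linked
		}
		if p == nil {
			continue // nothing usable to render; better to skip than panic in the template
		}
		out[id] = p
	}
	return out
}

func main() {
	all := map[int64]*provider{1: {Name: "gitlab"}, 2: {Name: "github"}}
	active := map[int64]*provider{1: all[1]} // source 2 has been disabled
	for id, p := range providersFor([]int64{1, 2}, active, all) {
		fmt.Println(id, p.Name)
	}
}
```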
Currently this feature is only available to admins, but there is no
clear reason why. If a user can actually merge pull requests, then this
seems fine as well.
This is useful in situations where direct pushes to the repository are
commonly done by developers.
---------
Co-authored-by: delvh <dev.lh@web.de>
Closes #27455
> The mechanism responsible for long-term authentication (the 'remember
me' cookie) uses a weak construction technique. It will hash the user's
hashed password and the rands value; it will then call the secure cookie
code, which will encrypt the user's name with the computed hash. If one
were able to dump the database, they could extract those two values to
rebuild that cookie and impersonate a user. That vulnerability exists
from the date the dump was obtained until a user changed their password.
>
> To fix this security issue, the cookie could be created and verified
using a different technique such as the one explained at
https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies.
The PR removes the now obsolete setting `COOKIE_USERNAME`.
1. Dropzone attachment removal, pretty simple replacement
2. Image diff: The previous code fetched every image twice, once via
`img[src]` and once via `$.ajax`. Now each image is fetched once, and a
second time only when necessary. The image diff code was partially
rewritten.
---------
Co-authored-by: Giteabot <teabot@gitea.io>
storageHandler() is written as a middleware but is used as an endpoint
handler, and thus `next` is actually `nil`, which causes a nil pointer
dereference when a request URL does not match the pattern (where it
calls `next.ServeHTTP()`).
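A stripped-down reproduction of the pattern and the fix (illustrative only, not the actual Gitea code): a handler written in middleware form panics on a nil `next`, while a plain endpoint handler can simply return a 404 for non-matching URLs.
```go
package main

import (
	"net/http"
	"strings"
)

// Middleware form: fine when composed with a real `next`, but if registered
// directly as an endpoint handler, `next` is nil and non-matching URLs panic.
func storageMiddleware(prefix string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.HasPrefix(r.URL.Path, prefix) {
			next.ServeHTTP(w, r) // nil pointer dereference when next == nil
			return
		}
		w.Write([]byte("served from storage\n"))
	})
}

// Endpoint form of the same logic: no `next`, non-matching URLs get a 404.
func storageHandler(prefix string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !strings.HasPrefix(r.URL.Path, prefix) {
			http.NotFound(w, r)
			return
		}
		w.Write([]byte("served from storage\n"))
	}
}

func main() {
	http.Handle("/avatars/", storageHandler("/avatars/"))
	http.ListenAndServe(":8080", nil)
}
```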
Example CURL command to trigger the panic:
```
curl -I "http://yourhost/gitea//avatars/a"
```
Fixes #27409
---
Note: the diff looks big but it's actually a small change - all I did
was to remove the outer closure (and one level of indentation) ~and
removed the HTTP method and pattern checks as they seem redundant
because go-chi already does those checks~. You might want to check "Hide
whitespace" when reviewing it.
Alternative solution (a bit simpler): append `, misc.DummyOK` to the
route declarations that utilize `storageHandler()` - this makes it
return an empty response when the URL is invalid. I've tested this one
and it works too. Or maybe it would be better to return a 400 error in
that case (?)
This pull request is a minor code cleanup.
From the Go specification (https://go.dev/ref/spec#For_range):
> "1. For a nil slice, the number of iterations is 0."
> "3. If the map is nil, the number of iterations is 0."
`len` returns 0 if the slice or map is nil
(https://pkg.go.dev/builtin#len). Therefore, checking `len(v) > 0`
before a loop is unnecessary.
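For illustration, a tiny program showing the quoted behavior, which is why the redundant `len(v) > 0` guard can be dropped:
```go
package main

import "fmt"

func main() {
	var s []int          // nil slice
	var m map[string]int // nil map

	// Both loops run zero times; no len(v) > 0 guard is needed beforehand.
	for range s {
		fmt.Println("never reached")
	}
	for range m {
		fmt.Println("never reached")
	}
	fmt.Println(len(s), len(m)) // 0 0
}
```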
---
At the time of writing this pull request, there wasn't a lint rule that
catches these issues. The closest I could find is
https://staticcheck.dev/docs/checks/#S103
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This PR reduces the complexity of the system setting system.
It only takes one line to introduce a new option, and the option can be
used anywhere out of the box.
It is still performant (in fact, more performant than before) because
the config values are cached in the config system.
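A hedged sketch of that shape with invented names (not the actual Gitea config API): declaring an option is a single line, and reads are served from an in-memory cache.
```go
package main

import (
	"fmt"
	"sync"
)

// option describes one dynamic system setting; adding a new one is a single
// declaration line (see disableGravatar below).
type option struct {
	key string
	def bool
}

// store caches values so that reads don't hit the database every time.
type store struct {
	mu    sync.RWMutex
	cache map[string]bool
}

func (s *store) get(o option) bool {
	s.mu.RLock()
	v, ok := s.cache[o.key]
	s.mu.RUnlock()
	if ok {
		return v // served from the cache
	}
	// The real system would load the value from the settings table here;
	// this sketch just falls back to the declared default and caches it.
	s.mu.Lock()
	s.cache[o.key] = o.def
	s.mu.Unlock()
	return o.def
}

// Introducing a new option takes one line:
var disableGravatar = option{key: "picture.disable_gravatar", def: false}

func main() {
	s := &store{cache: map[string]bool{}}
	fmt.Println(s.get(disableGravatar))
}
```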