Bug 6589

Summary: Nvidia driver prevents Display Manager (sddm, lightdm, lxdm,…) from starting
Product: Fedora Reporter: F. Hortner <fhortner>
Component: nvidia-kmod    Assignee: Nicolas Chauvet <kwizart>
Status: RESOLVED SPAM    
Severity: normal CC: leigh123linux
Priority: P1    
Version: f37   
Hardware: x86_64   
OS: GNU/Linux   
Attachments: nvidia-bug-report without restarting sddm from tty3
nvidia-bug-report after restarting sddm from tty3
bootlog before restarting SDDM
bootlog after restarting SDDM
bootlog SElinux disabled
bootlog Nvidia installer + kernel parameters
60-nvidia.rules

Description F. Hortner 2023-02-25 14:50:46 CET
I have recently reinstalled Fedora 37 KDE spin due to many issues when installing the proprietary Nvidia driver.

As soon as I install the RPM Fusion NVIDIA driver and boot the computer, sddm/lightdm/lxdm, ... does not start; only a black screen with a working mouse cursor appears, so I cannot even log in properly.

I tried installing the official NVIDIA driver from the NVIDIA website, and that seemed to work with no issues. So I am experiencing the issue only with the RPM Fusion NVIDIA driver.

I have to switch to tty3 or tty4, ... to log in and run startx.
With startx, KDE starts normally with no issues.
sudo systemctl restart sddm (or lightdm/lxdm, ...) also works.
But the display manager does not start on its own at boot.

So I decided to completely uninstall the nvidia driver.
As soon as the Nvidia driver is uninstalled, SDDM works again as expected.

As soon as the Nvidia driver is installed again, SDDM also does not work anymore.

I already tried to add

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

to /etc/sddm/Xsetup
without any effect.
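For reference, a typical /etc/sddm/Xsetup for a PRIME/Optimus setup looks roughly like this (a sketch; the provider name "NVIDIA-0" is the usual default but should be verified with `xrandr --listproviders`):

```shell
#!/bin/sh
# /etc/sddm/Xsetup -- run by sddm before the greeter starts (X11 only).
# Routes the NVIDIA GPU's output through the integrated GPU's
# modesetting driver. "NVIDIA-0" is an assumption here; confirm the
# actual provider name with `xrandr --listproviders`.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```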
Comment 1 Nicolas Chauvet 2023-02-25 17:54:46 CET
Optimus is only well supported on gdm/GNOME.

Also, without any logs we cannot tell much. Please attach the nvidia-bug-report generated archive and any relevant KDE logs, fetched remotely (ssh from another host) during a graphical session attempt.


More likely the NVIDIA installer does not set some option, and that difference is what exposes your issue with SDDM.
Comment 2 F. Hortner 2023-02-25 18:25:09 CET
I do not even have an external monitor connected.
It's just the internal monitor (Thinkpad P1 Gen2 with a Quadro T2000).

Also: no issue with original Nvidia driver from Nvidia website.

I have attached the nvidia-bug-reports

nvidia-bug-report without restarting sddm via tty3: nvidia-bug-report.log-no_restart.gz

nvidia-bug-report after restarting sddm via tty3: nvidia-bug-report.log-restart_sddm.gz
Comment 3 F. Hortner 2023-02-25 18:25:56 CET
Created attachment 2476 [details]
nvidia-bug-report without restarting sddm from tty3
Comment 4 F. Hortner 2023-02-25 18:26:36 CET
Created attachment 2477 [details]
nvidia-bug-report after restarting sddm from tty3
Comment 5 F. Hortner 2023-02-25 20:17:57 CET
to get some more details, I checked

systemctl status sddm

and

journalctl -b | grep -i sddm

with no error or warning whatsoever.
Strangely enough, systemctl status sddm says that sddm is running.

What kind of information would you additionally require?
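When the service reports as running but the greeter dies silently, crash details usually land in the systemd coredump store rather than in a journal grep. A sketch of how to inspect them (assumes systemd-coredump is active, which is the Fedora default):

```shell
# List recorded crashes; Xorg and sddm-greeter would show up here.
coredumpctl list
# Print metadata plus the captured stack trace of the latest Xorg dump.
coredumpctl info Xorg
# Open the dump in gdb for a full interactive backtrace (needs gdb).
coredumpctl gdb Xorg
```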
Comment 6 F. Hortner 2023-02-26 22:20:53 CET
Dear Nicolas,

I completely reset the laptop (Thinkpad P1 Gen2 with a Nvidia Quadro T2000).
I also reset UEFI defaults.

Since I already reinstalled the OS, I thought, why not give F36 a try.
So I installed the latest F36 rpmfusion Nvidia driver 525.89.02.
No issue with SDDM under F36.

So I upgraded to F37. I had to reinstall the RPM Fusion NVIDIA driver 525.89.02, and SDDM no longer started.

So the issue seems to start with Fedora 37.
Comment 7 F. Hortner 2023-03-05 13:08:49 CET
Any news about this issue?
Comment 8 leigh scott 2023-03-07 09:51:10 CET
(In reply to F. Hortner from comment #7)
> Any news about this issue?

Try reporting the issue upstream.


https://forums.developer.nvidia.com/c/gpu-graphics/linux/148


Also report it to Fedora, as it is iris_dri.so that is crashing Xorg.
Comment 9 F. Hortner 2023-03-07 10:12:32 CET
(In reply to leigh scott from comment #8)
> (In reply to F. Hortner from comment #7)
> 
> ...as it's the iris_dri.so which is crashing xorg.

Could you please explain this in more detail?
Comment 10 leigh scott 2023-03-07 11:55:39 CET
(In reply to F. Hortner from comment #9)
> (In reply to leigh scott from comment #8)
> > (In reply to F. Hortner from comment #7)
> > 
> > ...as it's the iris_dri.so which is crashing xorg.
> 
> could you please explain this in more detail.

Stack trace of thread 1203:
#0  0x00007fc9895b1e5c __pthread_kill_implementation (libc.so.6 + 0x8ce5c)
#1  0x00007fc989561a76 raise (libc.so.6 + 0x3ca76)
#2  0x00007fc98954b7fc abort (libc.so.6 + 0x267fc)
#3  0x000055f798271520 OsAbort (Xorg + 0x1bd520)
#4  0x000055f7982793e5 FatalError (Xorg + 0x1c53e5)
#5  0x000055f798270129 OsSigHandler (Xorg + 0x1bc129)
#6  0x00007fc989561b20 __restore_rt (libc.so.6 + 0x3cb20)
#7  0x000055f7981d313d RRCrtcDestroyResource (Xorg + 0x11f13d)
#8  0x000055f798137cae doFreeResource (Xorg + 0x83cae)
#9  0x000055f79813a81c FreeClientResources.part.0 (Xorg + 0x8681c)
#10 0x000055f7980fd6ce main (Xorg + 0x496ce)
#11 0x00007fc98954c510 __libc_start_call_main (libc.so.6 + 0x27510)
#12 0x00007fc98954c5c9 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x275c9)
#13 0x000055f7980fe0f5 _start (Xorg + 0x4a0f5)

Stack trace of thread 1219:
#0  0x00007fc9895acd16 __futex_abstimed_wait_common (libc.so.6 + 0x87d16)
#1  0x00007fc9895af510 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a510)
#2  0x00007fc986f0c5dd cnd_wait (iris_dri.so + 0x10c5dd)
#3  0x00007fc986ebcd3b util_queue_thread_func (iris_dri.so + 0xbcd3b)
#4  0x00007fc986f0c50b impl_thrd_routine (iris_dri.so + 0x10c50b)
#5  0x00007fc9895b012d start_thread (libc.so.6 + 0x8b12d)
#6  0x00007fc989631bc0 __clone3 (libc.so.6 + 0x10cbc0)
Comment 11 F. Hortner 2023-03-08 10:13:37 CET
Created attachment 2480 [details]
bootlog before restarting SDDM

hostname renamed to localhost
Comment 12 F. Hortner 2023-03-08 10:16:00 CET
Created attachment 2481 [details]
bootlog after restarting SDDM

hostname renamed to localhost.

systemd-coredump[1392]: Process 1272 (sddm-greeter) of user 986 dumped core.

and

systemd-coredump[1394]: Process 1172 (Xorg) of user 0 dumped core.

only after restarting SDDM via

sudo systemctl restart sddm
Comment 13 F. Hortner 2023-03-08 10:27:51 CET
For clarification:
I restarted the computer --> since I only get a black screen with a mouse cursor, I switched to tty3 --> logged in via tty3 --> created a boot log via journalctl -b > "bootlog before restarting SDDM.txt"

According to "bootlog before restarting SDDM.txt",

there is no error from mesa (iris_dri.so).


--> restarted SDDM via sudo systemctl restart sddm --> created a boot log via journalctl -b > "bootlog after restarting SDDM.txt"

The coredumps happened only after restarting SDDM via systemctl.


If you check the boot log, you can see I had logged in before the coredumps happened:

Mär 08 10:01:55 localhost systemd-logind[936]: New session c1 of user sddm.
Mär 08 10:02:19 localhost systemd-logind[936]: New session 2 of user reinhard.

Mär 08 10:02:54 localhost sudo[1386]: reinhard : TTY=tty3 ; PWD=/home/reinhard ; USER=root ; COMMAND=/usr/bin/systemctl restart sddm

Mär 08 10:02:54 localhost systemd-coredump[1392]: Process 1272 (sddm-greeter) of user 986 dumped core.
Mär 08 10:02:55 localhost systemd-coredump[1394]: Process 1172 (Xorg) of user 0 dumped core.


So I assume the black screen with mouse cursor is not related to iris_dri.so
Comment 14 leigh scott 2023-03-08 10:32:20 CET
Try disabling SELinux.


Mär 08 10:02:56 localhost audit[1520]: AVC avc:  denied  { read } for  pid=1520 comm="gdb" name="nvidiactl" dev="devtmpfs" ino=930 scontext=system_u:system_r:abrt_t:s0-s0:c0.c1023 tcontext=system_u:object_r:xserver_misc_device_t:s0 tclass=chr_file permissive=0
Mär 08 10:02:56 localhost audit[1520]: AVC avc:  denied  { read } for  pid=1520 comm="gdb" name="nvidia0" dev="devtmpfs" ino=931 scontext=system_u:system_r:abrt_t:s0-s0:c0.c1023 tcontext=system_u:object_r:xserver_misc_device_t:s0 tclass=chr_file permissive=0
[the same two AVC denials repeat several more times in the log]
Comment 15 F. Hortner 2023-03-08 10:48:13 CET
(In reply to leigh scott from comment #14)
> Try disabling selinux
> 
> 
> Mär 08 10:02:56 localhost audit[1520]: AVC avc:  denied  { read } for 
> pid=1520 comm="gdb" name="nvidiactl" dev="devtmpfs" ino=930
> scontext=system_u:system_r:abrt_t:s0-s0:c0.c1023
> tcontext=system_u:object_r:xserver_misc_device_t:s0 tclass=chr_file
> permissive=0

This happens AFTER logging in via tty3. Have a look at the timestamps.

Mär 08 10:02:19 localhost systemd-logind[936]: New session 2 of user reinhard.

Mär 08 10:02:54 localhost sudo[1386]: reinhard : TTY=tty3 ; PWD=/home/reinhard ; USER=root ; COMMAND=/usr/bin/systemctl restart sddm

Mär 08 10:02:56 localhost audit[1520]: AVC avc:  denied  { read } for  pid=1520 comm="gdb" name="nvidiactl" dev="devtmpfs" ino=930 scontext=system_u:system_r:abrt_t:s0-s0:c0.c1023 tcontext=system_u:object_r:xserver_misc_device_t:s0 tclass=chr_file permissive=0




I suggest concentrating on the errors from "bootlog BEFORE restarting SDDM".

"bootlog AFTER restarting SDDM" is just meant to prove that the other errors occurred after SDDM had already failed to start.
Comment 16 F. Hortner 2023-03-08 14:12:49 CET
Created attachment 2482 [details]
bootlog SElinux disabled

I've disabled SELinux by changing the setting in /etc/selinux/config from enforcing to disabled.
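(For reference, SELinux can also be switched off temporarily without a reboot; this is the standard procedure, not specific to this bug:)

```shell
# Switch to permissive mode for the current boot only (no reboot needed);
# reverts automatically on the next boot.
sudo setenforce 0
getenforce          # should now report "Permissive"
# Denials are still logged even though they no longer block anything:
sudo ausearch -m avc -ts recent
```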

As expected, disabling SELinux does not solve the problem; see the new boot log.
I still get a black screen with a mouse cursor.

Again, this problem does not appear when using the NVIDIA installer.
Comment 17 Nicolas Chauvet 2023-03-08 14:29:32 CET
(In reply to F. Hortner from comment #16)

> Again, this problem does not appear when using nvidia installer.

Please disable nvidia-drm.modeset=1, which is a difference from the NVIDIA installer (still, the bug is in the display manager, as we cannot keep modeset disabled by default just to work around a broken display manager).

/usr/sbin/grubby --update-kernel=ALL --remove-args='nvidia-drm.modeset=1'
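The change can be verified before and after with grubby; only the inspection commands below are additions to the command above:

```shell
# Show the kernel command line of every installed kernel entry.
sudo grubby --info=ALL | grep ^args
# The removal takes effect on the next boot; the running kernel's
# actual command line is in /proc/cmdline.
cat /proc/cmdline
```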
Comment 18 F. Hortner 2023-03-08 14:38:26 CET
I tried temporarily removing the kernel parameter nvidia-drm.modeset=1;
it seems to work so far.

I will recheck this with the nvidia installer.

The question remains:
why doesn't this lead to an issue in Fedora 36, but it does in Fedora 37?
Comment 19 Nicolas Chauvet 2023-03-08 14:51:17 CET
(In reply to F. Hortner from comment #18)
...
> Remains the question:
> Why doesn't this lead to an issue in Fedora 36 but for Fedora 37 it does?
Display manager regression?
Comment 20 F. Hortner 2023-03-08 15:09:08 CET
Created attachment 2483 [details]
bootlog Nvidia installer + kernel parameters

Bad news:
I completely removed the RPM Fusion NVIDIA driver and installed the drivers with the NVIDIA installer.

I have added the following kernel parameters:
nvidia-drm.modeset=1 initcall_blacklist=simpledrm_platform_driver_init

SDDM starts as expected.
Comment 21 leigh scott 2023-03-08 21:44:25 CET
(In reply to Nicolas Chauvet from comment #19)
> (In reply to F. Hortner from comment #18)
> ...
> > Remains the question:
> > Why doesn't this lead to an issue in Fedora 36 but for Fedora 37 it does?
> display manager regression ?

Maybe this?

https://github.com/sddm/sddm/issues/1636
Comment 22 F. Hortner 2023-03-09 08:59:47 CET
This issue is not related to SDDM alone.
I have the same issue with other display managers as well, such as lightdm and lxdm.
Comment 23 leigh scott 2023-03-09 12:00:58 CET
lightdm works fine here.


Graphics:
  Device-1: NVIDIA TU117 [GeForce GTX 1650] driver: nvidia v: 530.30.02
    arch: Turing pcie: speed: 2.5 GT/s lanes: 8 ports: active: none off: DP-2
    empty: DP-1,HDMI-A-1 bus-ID: 01:00.0 chip-ID: 10de:1f82 class-ID: 0300
  Display: x11 server: X.Org v: 1.20.14 with: Xwayland v: 22.1.8 driver: X:
    loaded: modesetting,nouveau,nvidia unloaded: fbdev,vesa alternate: nv
    gpu: nvidia,nvidia-nvswitch display-ID: :0 screens: 1
  Screen-1: 0 s-res: 3840x2160 s-dpi: 157 s-size: 621x341mm (24.45x13.43")
    s-diag: 708mm (27.89")
  Monitor-1: DP-2 note: disabled pos: primary model: Idek Iiyama PL2888UH
    serial: 16843009 res: 3840x2160 hz: 60 dpi: 157
    size: 621x341mm (24.45x13.43") diag: 708mm (27.9") modes: max: 3840x2160
    min: 640x480
  Monitor-2: Unknown-1 mapped: None-1-1 note: disabled size-res: N/A
    modes: 1024x768
  API: OpenGL v: 4.6.0 NVIDIA 530.30.02 renderer: NVIDIA GeForce GTX
    1650/PCIe/SSE2 direct render: Yes

rpm -qa lightdm\* slick\* \*nvidia\* | sort
akmod-nvidia-530.30.02-2.fc38.x86_64
kmod-nvidia-6.2.0-63.fc38.x86_64-530.30.02-2.fc38.x86_64
kmod-nvidia-6.2.1-300.fc38.x86_64-530.30.02-2.fc38.x86_64
kmod-nvidia-6.2.2-300.fc38.x86_64-530.30.02-2.fc38.x86_64
lightdm-1.32.0-3.fc38.x86_64
lightdm-gobject-1.32.0-3.fc38.x86_64
lightdm-settings-1.6.1-2.fc38.noarch
nvidia-gpu-firmware-20230210-147.fc38.noarch
nvidia-persistenced-530.30.02-1.fc39.x86_64
nvidia-settings-530.30.02-1.fc38.x86_64
nvidia-vaapi-driver-0.0.8-1.fc38.x86_64
slick-greeter-1.6.1-2.fc38.x86_64
slick-greeter-cinnamon-1.6.1-2.fc38.noarch
xorg-x11-drv-nvidia-530.30.02-1.fc39.x86_64
xorg-x11-drv-nvidia-cuda-530.30.02-1.fc39.x86_64
xorg-x11-drv-nvidia-cuda-libs-530.30.02-1.fc39.x86_64
xorg-x11-drv-nvidia-kmodsrc-530.30.02-1.fc39.x86_64
xorg-x11-drv-nvidia-libs-530.30.02-1.fc39.x86_64
xorg-x11-drv-nvidia-power-530.30.02-1.fc39.x86_64
Comment 24 F. Hortner 2023-03-09 12:12:20 CET
Lucky you; I assume SDDM also works fine on your system.

The fact is, with the original NVIDIA installer, SDDM, lightdm, lxdm, ... also work fine on my machine, even with the same kernel parameters as the RPM Fusion NVIDIA driver.


BUT: the RPM Fusion NVIDIA driver does not work, neither with sddm nor with lightdm or lxdm.

Since you know best what you have adapted from the NVIDIA drivers, I suggest that we go through these adaptations. That way I could try changing these parameters and see if it works out.
Comment 25 Nicolas Chauvet 2023-03-09 13:59:33 CET
(In reply to F. Hortner from comment #24)
...
> Since you know best what you have adapted from nvidia drivers, I suggest
> that we go through these adaptions. Thus, I could try to change these
> parameters and see if it works out.

Unless you are going to pay for such a service, this is not how open source works at all.

Our packaging process is completely open, and if you find something that breaks your setup (which we were not able to reproduce), feel free to submit a pull request.
https://github.com/rpmfusion/xorg-x11-drv-nvidia/


Thanks for your understanding.
Comment 26 leigh scott 2023-03-09 14:01:01 CET
(In reply to F. Hortner from comment #24)
> Lucky you, I assume that also SDDM does work fine on your system.
> 
> Fact is, with original nvidia installer, SDDM, lightdm, lxdm,... also does
> work fine on my machine. Even with the same kernel parameters as rpmfusion
> nvidia driver.
> 


sddm-x11 and sddm-wayland-generic also work fine here.
As the lightdm maintainer, I know there are no reported issues regarding nvidia.

> 
> BUT: rpmfusion nvidia driver does not work, neither with sddm, lightdm nor
> with lxdm.
> 

They work fine here and there are no other reports.


> Since you know best what you have adapted from nvidia drivers, I suggest
> that we go through these adaptions. Thus, I could try to change these
> parameters and see if it works out.

No! Why should I take your issue seriously when you have used the NVIDIA .run file, which is known to overwrite system files?
Comment 27 F. Hortner 2023-03-09 14:07:31 CET
(In reply to leigh scott from comment #26)
> No!, why should I take your issue seriously when you have used the nvidia
> run file which is known to overwrite system files.

But you did read in my first post that I freshly installed Fedora 37, didn't you?
Besides, it's no problem for me to reinstall it again, without using the NVIDIA installer at all.
Comment 28 F. Hortner 2023-03-09 14:12:56 CET
So it's fine for me to reinstall my system again from scratch.

But for information:
After reinstalling Fedora 37 (no NVIDIA installer used), I installed the RPM Fusion repositories and the RPM Fusion NVIDIA drivers. Nothing else was done.

And it did not work. I got the black screen with a functional mouse cursor.

Then I tried lightdm and lxdm as well; they did not work either.


So do I understand you correctly that you only take issues seriously when you have experienced them yourself?
Comment 29 F. Hortner 2023-03-09 14:15:53 CET
(In reply to Nicolas Chauvet from comment #25)
> (In reply to F. Hortner from comment #24)
> ...
> > Since you know best what you have adapted from nvidia drivers, I suggest
> > that we go through these adaptions. Thus, I could try to change these
> > parameters and see if it works out.
> 
> Unless you are going to pay for such service this is not how open source is
> working at all.
> 
> Our packaging process is totally open and if you find something that break
> your setup (that we weren't able to reproduce), feel free to submit a pull
> request.
> https://github.com/rpmfusion/xorg-x11-drv-nvidia/
> 
> 
> Thanks for your understanding.


My understanding of open source is to work together, to support each other in finding and resolving issues.

But what I take away from what you told me is:
go and get your issue fixed yourself, we don't care about your problems.
Comment 30 leigh scott 2023-03-09 17:09:51 CET
(In reply to F. Hortner from comment #6)

> Since I already reinstalled the OS, I thought, why not give F36 a try.
> So I installed the latest F36 rpmfusion Nvidia driver 525.89.02.
> No issue with SDDM under F36.

The f36 nvidia driver is exactly the same as the f37 nvidia driver.
The driver uses pre-compiled libs provided by nvidia.




(In reply to F. Hortner from comment #29)

> But what I understand what you told me is:
> go and get your issue fixed yourself, we don't care about your problems.

It isn't that we don't care; we need to be able to reproduce the issue.
I don't have any Optimus hardware (no one provides hardware funding), so I can't work on improving Optimus support.
Comment 31 F. Hortner 2023-03-09 17:49:32 CET
(In reply to leigh scott from comment #30)
> The f36 nvidia driver is exactly the same as the f37 nvidia driver.
> The driver uses pre-compiled libs provided by nvidia.

Yep, I had already found this out by comparing the branches on GitHub.

Since you mentioned iris_dri.so, I also tried to downgrade all mesa-related packages using:
sudo dnf downgrade mesa*.{x86_64,i686}

with no success.


(In reply to leigh scott from comment #30)
> It isn't that we don't care, we need to be able to reproduce the issue.
> I don't have any optimus hardware (no one provides hardware funding) so
> can't work on improving optimus support.

Yeah, Nicolas wants to get paid for helping.
Don't get me wrong, I appreciate your effort, but as a medical physicist I am not in a position to understand everything you have done differently from the NVIDIA installer.

And saying
"open source means fixing it yourself; come back once you have fixed it" is not going to work, even if Nicolas sees it like that.

Shall I tell patients suffering from cancer who need radiation therapy: "Look, there is the library, all knowledge is freely available - once you have found the appropriate configuration for your treatment, give me a ring ...!"

I guess this is not what you would want to experience, right?


So let's put this discussion aside and start finding a solution together, please.

So far I have checked the boot log and the Xorg logs.
Are there any other logs where related errors etc. could be found?


(In reply to leigh scott from comment #26)
> ... you have used the nvidia run file which is known to overwrite system files.
According to the NVIDIA howto: https://rpmfusion.org/Howto/NVIDIA#Recover_from_NVIDIA_installer
Recover from NVIDIA installer

The NVIDIA binary driver installer overwrites some configuration and libraries. If you want to recover to a clean state, either to use nouveau or the packaged driver, use:

rm -f /usr/lib{,64}/libGL.so.* /usr/lib{,64}/libEGL.so.*
rm -f /usr/lib{,64}/xorg/modules/extensions/libglx.so
dnf reinstall xorg-x11-server-Xorg mesa-libGL mesa-libEGL libglvnd\*
mv /etc/X11/xorg.conf /etc/X11/xorg.conf.saved

This is what I have done, so it should be a clean, recovered system.
Comment 32 F. Hortner 2023-03-09 18:54:42 CET
Could this somehow be of interest:

Mär 09 18:11:59 localhost kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module  525.89.02  Wed Feb  1 23:23:25 UTC 2023
Mär 09 18:11:59 localhost (udev-worker)[719]: nvidia: Process '/usr/bin/bash -c '/usr/bin/mknod -Z -m 666 /dev/nvidiactl c 195 255'' failed with exit code 1.
Mär 09 18:11:59 localhost (udev-worker)[695]: nvidia: Process '/usr/bin/bash -c '/usr/bin/mknod -Z -m 666 /dev/nvidiactl c 195 255'' failed with exit code 1.
Mär 09 18:11:59 localhost avahi-daemon[897]: Server startup complete. Host name is localhost.local. Local service cookie is 3490645352.
Mär 09 18:11:59 localhost (udev-worker)[695]: nvidia: Process '/usr/bin/bash -c 'for i in $(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \  -f 4); do /usr/bin/mknod -Z -m 666 /dev/nvidia${i} c 195 ${i}; done'' failed with exit code 1.
Comment 33 leigh scott 2023-03-10 10:41:16 CET
Created attachment 2484 [details]
60-nvidia.rules

Try this file: copy it to /usr/lib/udev/rules.d/60-nvidia.rules and reboot.
Comment 34 F. Hortner 2023-03-10 11:12:31 CET
Two errors remain:

Mär 10 10:57:05 localhost (udev-worker)[712]: nvidia: Process '/usr/bin/bash -c '/usr/bin/mknod -Z -m 666 /dev/nvidiactl c $(grep nvidia-frontend /proc/devices | cut -d \  -f 1) 255'' failed with exit code 1.
Mär 10 10:57:05 localhost (udev-worker)[712]: nvidia: Process '/usr/bin/bash -c 'for i in $(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \  -f 4); do /usr/bin/mknod -Z -m 666 /dev/nvidia${i} c $(grep nvidia-frontend /proc/devices | cut -d \  -f 1) ${i}; done'' failed with exit code 1.
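Those failures are consistent with `grep nvidia-frontend /proc/devices` matching nothing, which hands `mknod` an empty major number. A minimal reproduction of the extraction step (the `/proc/devices` excerpt below is illustrative, not taken from this machine; some driver builds register the character device as plain "nvidia" rather than "nvidia-frontend"):

```shell
# Simulated /proc/devices excerpt (illustrative values).
devices='195 nvidia
510 nvidia-uvm'

# The extraction the udev rule performs:
major=$(printf '%s\n' "$devices" | grep nvidia-frontend | cut -d ' ' -f 1)
echo "major='$major'"   # prints major='' -> mknod gets no major number and fails
```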

Nevertheless, I found something else:
the kernel parameter line in /etc/default/grub contains every parameter twice,
before and after "rhgb quiet",

like so:
rd.driver.blacklist=nouveau modprobe.blacklist=nouveau nvidia-drm.modeset=1 initcall_blacklist=simpledrm_platform_driver_init rhgb quiet rd.driver.blacklist=nouveau modprobe.blacklist=nouveau nvidia-drm.modeset=1 initcall_blacklist=simpledrm_platform_driver_init

I removed everything before "rhgb quiet", and also "nvidia-drm.modeset=1 initcall_blacklist=simpledrm_platform_driver_init" after "rhgb quiet".
I kept nouveau blacklisted
and recreated the grub config.

The display manager starts normally again.

As soon as I add either of
"nvidia-drm.modeset=1"
or
"initcall_blacklist=simpledrm_platform_driver_init"

the display manager does not start and I just get a black screen with a mouse cursor.
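The edit-and-regenerate cycle described above is, on Fedora, roughly this (a sketch; paths are the Fedora defaults):

```shell
# 1. Edit the kernel parameter line (remove the duplicated parameters).
sudo vi /etc/default/grub            # GRUB_CMDLINE_LINUX="..."
# 2. Regenerate the grub configuration; on Fedora this path is valid
#    for both BIOS and UEFI installs.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# 3. Verify what the default entry will boot with.
sudo grubby --info=DEFAULT | grep ^args
```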
Comment 35 leigh scott 2023-03-10 12:15:25 CET
Is sddm-x11 installed?
Comment 36 F. Hortner 2023-03-10 12:23:39 CET
yep,
sddm-x11-0.19.0^git20230306.572b128-1.fc37.noarch
is installed
Comment 37 F. Hortner 2023-03-24 21:46:53 CET
Sadly, removing "nvidia-drm.modeset=1" cannot be the solution, since with it removed, external monitors no longer work.
Comment 38 F. Hortner 2023-03-25 11:59:28 CET
The issue persists in Fedora 38 with the latest RPM Fusion 530.30.03 drivers if the display manager runs on X11 instead of Wayland.

The strange thing is that there is no error or warning whatsoever in:

● sddm.service - Simple Desktop Display Manager
     Loaded: loaded (/usr/lib/systemd/system/sddm.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: active (running) since Sat 2023-03-25 11:53:40 CET; 3min 24s ago
       Docs: man:sddm(1)
             man:sddm.conf(5)
   Main PID: 1164 (sddm)
      Tasks: 26 (limit: 38161)
     Memory: 304.7M
        CPU: 1.293s
     CGroup: /system.slice/sddm.service
             ├─1164 /usr/bin/sddm
             └─1186 /usr/libexec/Xorg -nolisten tcp -background none -seat seat0 vt2 -auth /run/sddm/xauth_uW>

Mär 25 11:53:40 localhost systemd[1]: Started sddm.service - Simple Desktop Display Manager.
Mär 25 11:53:42 localhost sddm-helper[1259]: pam_unix(sddm-greeter:session): session opened for user sddm(u>
Mär 25 11:53:42 localhost sddm-helper[1259]: Starting X11 session: "" "/usr/bin/sddm-greeter --socket /tmp/>



The same goes for journalctl -xeu sddm.service:
no error, no warning.

Mär 25 11:53:40 reikerwork1 systemd[1]: Started sddm.service - Simple Desktop Display Manager.
░░ Subject: A start job for unit sddm.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit sddm.service has finished successfully.
░░ 
░░ The job identifier is 321.
Mär 25 11:53:42 reikerwork1 sddm-helper[1259]: pam_unix(sddm-greeter:session): session opened for user sddm(u>
Mär 25 11:53:42 reikerwork1 sddm-helper[1259]: Starting X11 session: "" "/usr/bin/sddm-greeter --socket /tmp/>
Comment 39 leigh scott 2023-03-25 17:43:54 CET
(In reply to F. Hortner from comment #37)
> Sadly, removing "nvidia-drm.modeset=1" can not be the solution, since if it
> is removed, external monitors do not work any more.

Please remove that option. It isn't required with 530.30.03; it's enabled by default in the kernel module.

If that option is present, it is likely to cause this:

https://bugzilla.redhat.com/show_bug.cgi?id=2176782

The Fedora kernel maintainer uses it to disable simpledrm.
Comment 40 F. Hortner 2023-03-25 17:54:54 CET
(In reply to leigh scott from comment #39)
> (In reply to F. Hortner from comment #37)
> > Sadly, removing "nvidia-drm.modeset=1" can not be the solution, since if it
> > is removed, external monitors do not work any more.
> 
> Please remove that option. it isn't required with 530.30.03, it's enabled by
> default in the kernel module.

This option is not present.

I reinstalled Fedora 38 on a blank machine and installed the RPM Fusion 530.30.02 driver.

sddm running on X11 gives a black screen + mouse cursor, even with a freshly installed Fedora 38 and the RPM Fusion beta 530.30.02.
Comment 41 F. Hortner 2023-03-25 18:23:47 CET
Ok, let me put it this way:

The issue exists for Fedora >= 37:
display manager running on X11 + RPM Fusion NVIDIA driver + Fedora >= 37 --> black screen + mouse cursor

There is no error or warning whatsoever in the logs, as mentioned in comment 38:
https://bugzilla.rpmfusion.org/show_bug.cgi?id=6589#c38
Comment 42 F. Hortner 2023-03-25 18:28:51 CET
Ok, let me put it this way:

The issue exists for Fedora 37 and 38:
display manager running on X11 + RPM Fusion NVIDIA driver --> black screen + mouse cursor

There is no error or warning whatsoever in the logs, as mentioned in comment 38.
Comment 43 F. Hortner 2023-04-01 12:21:54 CEST
(In reply to leigh scott from comment #30)
> I don't have any optimus hardware (no one provides hardware funding) so
> can't work on improving optimus support.

May I ask what steps you would take to identify the issue if I sent you my Thinkpad P1 Gen2?
Comment 44 F. Hortner 2023-04-07 12:36:45 CEST
(In reply to leigh scott from comment #30)
> It isn't that we don't care, we need to be able to reproduce the issue.
> I don't have any optimus hardware (no one provides hardware funding) so
> can't work on improving optimus support.

Since you are ignoring an offer to send over a P1 Gen2 for one week, lack of Optimus hardware is not the reason for your lack of interest.


Fine by me, but then:
could you please delete all attachments and the whole thread.

Thank you.
Comment 45 Nicolas Chauvet 2023-04-07 12:54:48 CEST
(In reply to F. Hortner from comment #44)
We are community members. If you want SLA-style professional services, you can
contact https://www.08000linux.com/accueil/ for a quote.


You can also find other entities, including asking NVIDIA directly.
Comment 46 Nicolas Chauvet 2023-04-07 12:56:41 CEST
You can remove logs by opening Details and tagging them as obsolete.
Comment 47 F. Hortner 2023-04-07 13:05:15 CEST
(In reply to Nicolas Chauvet from comment #46)
> You can remove logs by using Details and tag as obsolete.

Even obsolete attachments can be viewed/downloaded.

In any case, I have no option to set them obsolete,
so could you please physically delete them.

Thank you
Comment 48 F. Hortner 2023-08-22 15:11:44 CEST Comment hidden (spam)