This patchset adds warnings when a crate-level attribute is used in another position, an obsolete attribute is used, or an unknown attribute is used, since these usually come from mistakes.
Closes #3348
This is needed so that the FFI works as expected on platforms that don't
flatten aggregates the way the AMD64 ABI does, especially for `#[repr(C)]`.
This moves more of `type_of` into `trans::adt`, because the type might
or might not be an LLVM struct.
Closes #10308.
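For illustration, a minimal sketch (in current syntax, with a hypothetical C function `vec3_len`) of the kind of by-value `#[repr(C)]` aggregate whose ABI handling this change is about:

```rust
// Illustrative only: a #[repr(C)] aggregate passed by value across the FFI.
// Whether it is flattened into scalar registers or passed indirectly depends
// on the platform ABI, which is what type_of/trans::adt must agree on.
// This will not link without the C side providing vec3_len.
#[repr(C)]
pub struct Vec3 {
    pub x: f64,
    pub y: f64,
    pub z: f64,
}

extern "C" {
    // Assumed to be provided by a C library; hypothetical name.
    fn vec3_len(v: Vec3) -> f64;
}

fn main() {
    let v = Vec3 { x: 3.0, y: 4.0, z: 0.0 };
    println!("{}", unsafe { vec3_len(v) });
}
```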
This is useful for performance (otherwise logging is unbuffered), but it is
also needed for correctness. Because we can't block a task that is being
destroyed while it waits for its logger to close, loggers are opened with a
'CloseAsynchronously' specification. This causes libuv to defer the call to
close() until the next
turn of the event loop.
If you spin in a tight loop around printing, you never yield control back to the
libuv event loop, meaning that you simply enqueue a large number of close
requests but nothing is actually closed. Because this queue never drains,
repeatedly creating handles eventually fails, at which point the runtime
attempts to print the failure, causing mass
destruction.
Caching will provide better performance as well as prevent creation of too many
handles.
Closes #10626
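The failure mode is easy to picture; a sketch of the kind of loop that used to pile up unserviced close requests (illustrative only):

```rust
// Each print used to go through a freshly opened libuv handle whose deferred
// close() could only run on the next turn of the event loop. A loop like this
// never yields, so the close queue grows until handle creation fails.
// With a cached, per-task logger handle this is no longer a problem.
fn main() {
    for i in 0..100_000 {
        println!("iteration {}", i);
    }
}
```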
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and few people remain who fully
understand it
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach if we re-implement it at all.
Closes #8674. Closes #8318. Closes #8863.
I added a test case which does not compile today, and required changes on
privacy's side of things to get right. Additionally, this moves a good bit of
logic which did not belong in reachability into privacy.
All of reachability should solely be responsible for determining what the
reachable surface area of a crate is given the exported surface area (where the
exported surface area is that which is usable by external crates).
Privacy will now correctly figure out what's exported by deeply looking
through reexports. Previously if a module were reexported under another name,
nothing in the module would actually get exported in the executable. I also
consolidated the phases of privacy to be clearer about what's an input to what.
The privacy checking pass no longer uses the notion of an "all public" path, and
the embargo visitor is no longer an input to the checking pass.
Currently the embargo visitor is built as a saturating analysis because it's
unknown what portions of the AST are going to get re-exported.
This also cracks down on exported methods from impl blocks and trait blocks. If you implement a private trait, none of the symbols are exported, and if you have an impl for a private type none of the symbols are exported either. On the other hand, if you implement a public trait for a private type, the symbols are still exported. I'm unclear on whether this last part is correct, but librustc will fail to link unless it's in place.
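A sketch of the three impl cases described above (illustrative names, not the actual test case added in this change):

```rust
pub trait PublicTrait { fn public_method(&self); }
trait PrivateTrait { fn private_method(&self); }

pub struct PublicType;
struct PrivateType;

// (1) Implementing a private trait: none of these symbols are exported.
impl PrivateTrait for PublicType {
    fn private_method(&self) {}
}

// (2) An inherent impl on a private type: not exported either.
impl PrivateType {
    fn helper(&self) {}
}

// (3) A public trait implemented for a private type: the symbols are still
//     exported, since external code can reach them through the trait.
impl PublicTrait for PrivateType {
    fn public_method(&self) {}
}
```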
This was needed to access UEFI boot services in my new Boot2Rust experiment.
I also realized that Rust functions defined as extern always use the C calling convention regardless of which ABI they declare, so this pull request fixes that as well.
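As a sketch of the calling-convention fix (current syntax, with simplified signatures standing in for real UEFI ones):

```rust
// Before the fix, both of these definitions were lowered with the C calling
// convention no matter which ABI string was written; afterwards the declared
// ABI is honoured when the function is called from foreign code.
pub extern "C" fn c_callback(x: i32) -> i32 {
    x + 1
}

// "system" resolves to stdcall on x86 Windows-style targets and to the C
// convention elsewhere; the signature here is a made-up stand-in, not a
// real UEFI boot-services entry point.
pub extern "system" fn firmware_callback(status: usize) -> usize {
    status
}
```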
This implements a fair amount of the unimpl() functionality in io::native
relating to filesystem operations. I've also modified all io::fs tests to run in
both a native and uv environment (so everything is actually tested).
There are a few bits of remaining functionality which I was unable to get
working:
* truncate on windows
* change_file_times on windows
* lstat on windows
I think that change_file_times may just need a better interface, but the other
two have large implementations in libuv which I didn't want to tackle trying to
copy. I found a `chsize` function to work for truncate on windows, but it
doesn't quite seem to be working out.
This implements a fair amount of the unimpl() functionality in io::native
relating to filesystem operations. I've also modified all io::fs tests to run in
both a native and uv environment (so everything is actually tested).
There are two bits of remaining functionality which I was unable to get
working:
* change_file_times on windows
* lstat on windows
I think that change_file_times may just need a better interface, but lstat has a
large implementation in libuv which I didn't want to tackle trying to copy.
If a function is marked as external, then it's likely desired for use with some
native library, so we're not really accomplishing a whole lot by internalizing
all of these symbols.
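For example (a hypothetical exported function, not one from this change):

```rust
// A function like this only exists so that native code can resolve it by
// name; internalizing the symbol during LLVM optimization would defeat
// that, so externally-visible functions keep external linkage.
#[no_mangle]
pub extern "C" fn plugin_entry(arg: u32) -> u32 {
    arg.wrapping_mul(2)
}
```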
* moved `extern` inside module
* changed `extern "stdcall"` to `extern "system"`
* changed `cfg(target_os="win32")` to `cfg(windows)`
* only run on Windows && x86 (not x86_64)
* updated copyright dates
There was a syntax error because the `extern "stdcall"` was outside the module instead of inside it.
* moved `extern` inside module
* changed `extern "stdcall"` to `extern "system"`
* changed `cfg(target_os="win32")` to `cfg(windows)`
* updated copyright dates
* changed `log(error, ...)` to `info!(...)`
* added `pub` keyword to kernel32 functions
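The resulting shape of the bindings is roughly the following (abbreviated sketch; only `GetLastError` is shown):

```rust
#[cfg(windows)]
mod kernel32 {
    // The extern block now lives inside the module, and "system" picks the
    // right convention per target: stdcall on x86 Windows, the C convention
    // on x86_64.
    extern "system" {
        pub fn GetLastError() -> u32;
    }
}
```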
Rename {struct-update,fsu}-moves-and-copies, since win32
failed to run the test: UAC prevents running any executable whose
name contains "update". (#10452)
Some tests related to #9205 are expected to fail on gcc 4.8,
so they are marked as `xfail-win32` instead of `xfail-fast`.
Some tests using `extra::tempfile` fail on win32 due to #10462.
Mark them as `xfail-win32`.
This commit re-organizes the io::native module slightly in order to have a
working implementation of rtio::IoFactory which uses native implementations. The
goal is to seamlessly multiplex among libuv/native implementations wherever
necessary.
Right now most of the native I/O is unimplemented, but we have existing bindings
for file descriptors and processes which have been hooked up. What this means is
that you can now invoke println!() from libstd with no local task, no local
scheduler, and even without libuv.
There's still plenty of work to do on the native I/O factory, but this is the
first steps into making it an official portion of the standard library. I don't
expect anyone to reach into io::native directly, but rather only std::io
primitives will be used. Each std::io interface seamlessly falls back onto the
native I/O implementation if the local scheduler doesn't have a libuv one
(hooray trait objects!)
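The fallback pattern is roughly the following (made-up trait and names; the real rtio interfaces are internal to the runtime):

```rust
trait IoFactory {
    fn fs_open(&mut self, path: &str);
}

struct NativeIoFactory;

impl IoFactory for NativeIoFactory {
    fn fs_open(&mut self, _path: &str) {
        // blocking, syscall-based implementation
    }
}

// Use the scheduler's libuv-backed factory when one exists, otherwise fall
// back to the native implementation behind the same trait object.
fn local_io(uv_factory: Option<Box<dyn IoFactory>>) -> Box<dyn IoFactory> {
    uv_factory.unwrap_or_else(|| Box::new(NativeIoFactory))
}
```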
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There is still the case of stack overflow to consider, however. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
This was marked xfail-test. According to Rust issue #912, this will not be
fixed.
There's no point in keeping the test if it is never intended to pass.
The logging macros all use libuv-based I/O, and there was one stray debug
statement in task::spawn which was executing before the I/O context was ready.
Remove it and add a test to make sure that we can continue to debug this sort of
code.
Closes #10405