Completion glyphs ( Bump to version 0.15.0) #1
Conversation
Thanks! The naming of the paket.references file is actually deliberate, to prevent it from affecting the project files in the tests folder (http://fsprojects.github.io/Paket/references-files.html#File-name-conventions). When you say we could consider adding the conversion to string, would that be in a later PR? It would be preferable not to break the API twice if possible.
I've reversed the name change. Here is my function translating a Glyph number to a string (based off Dave's list, but I had to make it more FunScript-friendly), used in the Atom plugin right now: https://github.com/fsprojects/FSharp.Atom/blob/develop/src/Autocomplete.fs#L30. We can just move it here if you think it's a better choice. Or we can move the function Dave linked to provide better string names. Your call.
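For reference, here is a minimal sketch of what such a glyph-to-string translation can look like in F#. The glyph numbers and kind names below are illustrative assumptions, not the exact mapping from the linked Autocomplete.fs:

```fsharp
// Illustrative sketch only: map a completion glyph number to a readable kind name.
// The numeric values here are assumptions; the real mapping comes from the
// compiler service's glyph enumeration.
let glyphToString (glyph: int) : string =
    match glyph with
    | 0 -> "Class"
    | 6 -> "Exception"
    | 9 -> "Field"
    | 12 -> "Interface"
    | 15 -> "Method"
    | 18 -> "Module"
    | 24 -> "Property"
    | _ -> "Other"
```

A plain pattern match like this also keeps the function FunScript-friendly, since it avoids dictionaries and reflection.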
@7sharp9: I want to use glyphs for getting the completion type in autocomplete; here is the current version of it.
Looking good! Is there a canonical (or something we can all agree on) single-letter representation of these glyphs? If so, it would be good to include that in the return object as well as the full name.
For almost all of them we can just use the starting letter of the word. Problems:
@Krzysztof-Cieslak yes, I forgot to update FSharp.AutoComplete.fsproj with the new paket.references filename. @7sharp9 so we should be calling It looks like it has been replaced by For the one-letter representation, some clashes are probably survivable. 'x' is OK for exception I think; if we only want to use one case, then it is hard to distinguish method and module, though.
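To make the clash concrete, here is a hedged sketch of a one-letter scheme that uses case to separate module from method. The letter choices are assumptions for illustration, not an agreed convention:

```fsharp
// Illustrative one-letter mapping; letters are assumptions only.
// 'x' marks exception as suggested above; upper/lower case splits
// Module from Method, if a case-sensitive scheme is acceptable.
let glyphNameToLetter (name: string) : string =
    match name with
    | "Exception" -> "x"
    | "Module" -> "M"
    | "Method" -> "m"
    | other -> other.Substring(0, 1).ToLower()
```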
@7sharp9 OK, I found that it is a wrapper in FSharp.CompilerBinding, but why is it a list of lists now rather than just a list?
Why is what a list of lists? The declarations? For overloads, the list of overloads is also present in the non-symbol one, albeit wrapped in the declaration wrapper type.
OK, I've had more of an explore, and the new API looks interesting, but it doesn't give us anything more we need right now, so let's just add this feature. @Krzysztof-Cieslak, could you:

Then I'll test and merge, thanks!
@rneatherway Unless you're using symbols as part of navigation and searching, they won't be as useful. In XS we use the symbol to format the tooltips and autocompletions, rather than using lots of string cutting/parsing operations to format for type colour etc. The ToolTipText types were a real pain to work with.
Yeah, I was having a look over your code and it looks much easier to work with for doing that kind of formatting. Things over here are still a bit more basic, but hopefully we will migrate over to using the symbols in due course. It's hard to know exactly what to do, because the display features of the various editors are pretty different.
OK, so I've rebased the PR and updated the implementation with the translation. I've also updated the JSON test results to match the new Completion API.
FSharp.AutoComplete/Program.fs
Outdated
Could you revert this line please?
Thanks, two small comments, then I'll merge and do a release tomorrow.
Done.
Great, the release is now available at https://github.com/fsharp/FSharp.AutoComplete/releases/tag/0.15.0
Test with overloaded methods and request in call
run dotnet restore and wait until process exit
Testing

# Daily Perf Improver - Benchmark Infrastructure Fix

## Summary

Fixed the existing benchmark infrastructure to enable cross-platform testing and establish a baseline for future performance measurements. This addresses Priority ionide#1 from the performance research plan: **Establish measurement baseline**.

## Goal and Rationale

**Performance target:** Enable reliable, reproducible benchmark execution across all platforms (Linux, macOS, Windows) to support systematic performance optimization work.

**Why it matters:** The existing benchmark had a hardcoded Windows file path that prevented execution in CI environments and on other platforms. Without working benchmarks, we cannot:

- Establish performance baselines
- Measure optimization impact
- Detect performance regressions
- Make data-driven optimization decisions

## Changes Made

### 1. Cross-Platform File Content Generation

**Before:**

```fsharp
let fileContents =
    IO.File.ReadAllText(
        @"C:\Users\jimmy\Repositories\public\TheAngryByrd\span-playground\Romeo and Juliet by William Shakespeare.txt"
    )
```

**After:**

```fsharp
// Generate synthetic file content for cross-platform benchmarking
let fileContents =
    let lines =
        [ 1..1000 ]
        |> List.map (fun i -> sprintf "let value%d = %d // This is line %d with some text content" i i i)

    String.concat "\n" lines
```

**Benefit:** Benchmarks now run on any platform without external file dependencies. Content is realistic F# code (1000 lines of let bindings).

### 2. Updated .NET Runtime Target

**Before:** `.NET 7` (`RuntimeMoniker.Net70`)

**After:** `.NET 8` (`RuntimeMoniker.Net80`)

**Benefit:** Matches the project's target framework (net8.0) as specified in `benchmarks/benchmarks.fsproj`, ensuring a consistent measurement environment.

## Approach

1. **Analyzed existing benchmark code** to understand requirements
2. **Generated synthetic F# content** that represents realistic code patterns
3. **Updated runtime moniker** to match project configuration
4. **Applied Fantomas formatting** to maintain code style consistency
5. **Verified build success** in Release configuration

## Impact Measurement

### Build Validation

✅ **Build Success:** Benchmarks compile successfully in Release mode

```
benchmarks -> /home/runner/work/FsAutoComplete/FsAutoComplete/benchmarks/bin/Release/net8.0/benchmarks.dll

Build succeeded.
Time Elapsed 00:00:11.64
```

### Benchmark Availability

The existing `SourceText_LineChanges_Benchmarks` benchmark can now be executed with:

```bash
dotnet run --project benchmarks -c Release --framework net8.0
```

**Parameterized test cases:** N ∈ {1, 15, 50, 100, 1000} iterations

**Memory tracking:** Enabled via `[<MemoryDiagnoser>]`

## Trade-offs

**✅ Pros:**

- Eliminates external file dependency
- Enables CI execution
- Faster benchmark startup (no file I/O)
- Consistent content across runs
- Cross-platform compatibility

**⚠️ Considerations:**

- Synthetic content may differ from real-world text files
- Fixed at 1000 lines (vs. the original "Romeo and Juliet", which may have been a different size)

**Mitigation:** The benchmark tests `SourceText` line manipulation, not F# parsing, so synthetic F# code is appropriate. Future benchmarks can add varied file sizes.

## Validation

### ✅ Build Tests

- **Release build:** Passed (11.64s)
- **Framework target:** net8.0 ✓
- **Code formatting:** Applied Fantomas ✓

### ✅ Code Review

- No logic changes to benchmark behavior
- Only data source and runtime version updated
- Formatting follows project conventions

## Future Work

This infrastructure fix enables:

1. **Baseline measurement** - Run benchmarks to establish current performance
2. **Expanded coverage** - Add benchmarks for:
   - LSP completion latency (Priority ionide#3 from the plan)
   - Hover/tooltip generation
   - Go-to-definition performance
   - Type checking operations
3. **CI integration** - Add benchmark runs to detect regressions
4. **Performance tracking** - Store baseline results for comparison

## Reproducibility

### Running the Benchmarks

```bash
# Build in Release mode (required for accurate results)
dotnet build -c Release

# Run all benchmarks
dotnet run --project benchmarks -c Release --framework net8.0

# Run with specific parameters
dotnet run --project benchmarks -c Release --framework net8.0 -- --filter "*SourceText*"

# Export results for comparison
dotnet run --project benchmarks -c Release --framework net8.0 -- --exporters json markdown
```

### Expected Behavior

- Benchmark creates 1000-line F# source text
- Tests line change operations with N iterations (1, 15, 50, 100, 1000)
- Reports mean time, standard deviation, and memory allocations
- Outputs to `BenchmarkDotNet.Artifacts/results/`

## Related

- **Research Plan:** [Discussion ionide#1](githubnext/FsAutoComplete#1)
- **Performance Guides:** `.github/copilot/instructions/profiling-measurement.md`
- **Daily Perf Improver Workflow:** `.github/workflows/daily-perf-improver.yml`
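As a small extension of the synthetic content generation described above, the generator could be parameterized by line count to support the varied file sizes mentioned under Future Work. The helper name below is hypothetical, not part of the existing benchmark code:

```fsharp
// Hypothetical helper: generate N lines of synthetic F# source for benchmarks,
// following the same line shape as the fixed 1000-line generator above.
let makeSyntheticSource (lineCount: int) : string =
    [ 1..lineCount ]
    |> List.map (fun i -> sprintf "let value%d = %d // This is line %d with some text content" i i i)
    |> String.concat "\n"
```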



As discussed in fsprojects/zarchive-fsharpbinding#1010:

Also:

- Reverted the `paket.references` file naming in the `FSharp.AutoComplete` project
- Fixed the `build.fsx` reference from `FSharp.AutoComplete.fsproj`
- Moved `build.fsx`, `paket.lock` and `readme.md` files to `sln` (I think it's the standard way of managing solutions with Paket and FAKE)