
Progor

Members
  • Content Count: 28
  • Joined
  • Last visited

Community Reputation: 38 Excellent
  1. I started trying to parse boostsets as well, and found an awesome tool to help: https://ide.kaitai.io/#, part of the https://kaitai.io/ kaitai-struct project. It gives you a DSL made specifically for parsing binary files and, more importantly, an IDE that highlights the section of the binary file that each parse covers. Once you've got the file defined in their simple YAML format, it generates code in a bunch of languages to do the actual parsing.

     The only difficulty is that the parser output is pretty verbose, mentioning every array size, string length, and offset pointer you had to use to define the file. I fixed this by naming the important variable at each level "value" and replacing the parent object with the value field whenever I see it.

     I've got boostsets and attrib_names (I wanted to start with something easy, to evaluate the tool) in a new project: https://github.com/dpwhittaker/coh-parse7

     With the following "API" available so far:

     https://dpwhittaker.github.io/coh-parse7/boostSets.json
     https://dpwhittaker.github.io/coh-parse7/attribNames.json

     I'll probably do client messages in this format next, then try to tackle powers to see if it ends up being even more straightforward than the Rust implementation. The easier it is to find and fix format changes after a release, the faster the tools that rely on it can get updated.
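     The "replace the parent object with the value field" cleanup can be sketched as a small recursive walk. This is a hypothetical Python sketch of the post-processing described above; `collapse_value` and the sample shapes are my own names, not part of the coh-parse7 project:

```python
def collapse_value(node):
    """Recursively replace any {"value": x, ...} wrapper emitted by the
    generated parser with x itself, discarding bookkeeping siblings such
    as string lengths and offset pointers."""
    if isinstance(node, dict):
        if "value" in node:
            # the wrapper object only existed to describe the parse;
            # keep just the payload
            return collapse_value(node["value"])
        return {k: collapse_value(v) for k, v in node.items()}
    if isinstance(node, list):
        return [collapse_value(v) for v in node]
    return node
```

     For example, a parsed string record like `{"len": 3, "value": "Fly"}` collapses to just `"Fly"`, and the same rule applies at every nesting level.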
  2. At least the recursion depth is limited by the size of the file, or I may have never emerged from the abyss!
  3. There are also some additional files that I don't think any of the projects have parsed yet. One is boostsets.bin. I may try to reverse engineer that one myself based on the ourodev code, but I don't have access to the Homecoming code, so I'm at a disadvantage. I'm not sure what information it will or won't provide.
  4. Alright, I went ahead and did the defaults and split into categories. This made all the files around 1MB or less (a few big ones up to 5MB), which is much more doable for a web app that's pulling in around 5-10 files to get the primary, secondary, pools, boosts, and pets for a single build. If you're going to do much more than that, you probably just want to load the 41MB powers.json with everything in it, including the powers possessed by Hellions in the Sewer trial. Maybe I'll make a powers.Player.json that has everything accessible to a player.

     There's also the powers.default.json. This file has the default value for any field that shows up in more than 20% of the powers. At each level of this object, the "sub" field represents the defaults of the object contained in that field. For instance, sub.effects.tag tells you the default value that should go in effects: [ { tag: ... } ]. This structure is more deeply nested than I thought (effects can have effects of their own), so I used a recursive function to find the defaults, and you'll probably need one to rebuild the original object from the defaults if you want to.

     Here's a shot of the files that are output (197 total). Feel free to look around in the GitHub project, or you can access them API-style like this: https://dpwhittaker.github.io/sif_parser/powers/Tanker_Melee.json

     I'll work on the archetypes and powersets next to act as an index into these files. I'll probably end up adding some folder structure here, so don't start building anything on this yet 🙂

     edit: removed the powers. prefix and moved them into a powers folder.
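     The recursive rebuild from powers.default.json could look like the following. This is a hypothetical Python sketch assuming the layout described above (scalar defaults as plain keys, nested defaults under "sub"); `apply_defaults` and the sample data are my own names, not the project's actual code:

```python
def apply_defaults(obj, defaults):
    """Fill in fields that were omitted from a stripped record.
    Plain keys in `defaults` are default values at this level;
    defaults["sub"][field] holds the defaults for objects nested
    under that field (e.g. sub.effects.tag)."""
    if not isinstance(defaults, dict):
        return obj
    result = dict(obj)
    # restore missing scalar defaults at this level
    for key, default in defaults.items():
        if key != "sub" and key not in result:
            result[key] = default
    # recurse into nested objects/arrays that have their own defaults
    for key, child_defaults in defaults.get("sub", {}).items():
        child = result.get(key)
        if isinstance(child, list):
            result[key] = [apply_defaults(c, child_defaults) if isinstance(c, dict) else c
                           for c in child]
        elif isinstance(child, dict):
            result[key] = apply_defaults(child, child_defaults)
    return result
```

     Because effects can contain effects of their own, the "sub" chain can repeat (sub.effects.sub.effects...), which is why the rebuild has to be recursive rather than a flat merge.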
  5. Yep, noticed that. So, got that figured out and made the first early, hard-coded, cobbled-together, command-line app: https://github.com/dpwhittaker/sif_parser

     Just "cargo run" it with bin.pigg and bin_powers.pigg in the parent directory to generate a powers.json in the parent directory. And here's the output for powers.bin: https://1drv.ms/u/s!AuGvcX8-qHq2luNtIXUr_nVnmKheqA?e=nMZT8N

     So, next steps are:
     • output the other bins as JSON
     • come up with a "default value" for each field and only serialize the field if it is different from the default, to cut down on file size (242MB uncompressed is a bit much for a web app)
     • split it up into categories similar to @RubyRed's API

     This post-processing will probably happen in a language I'm more comfortable with - Node or Python. In any case, this should get it down to something that fits comfortably in GitHub Pages limits, so I'll publish it there as a "static API" similar to RubyRed's.
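     The "only serialize non-default fields" step could be done with a tally-then-strip pass like this. A hypothetical Python sketch, not the project's actual code; the function names and the coverage threshold are my own assumptions:

```python
import json
from collections import Counter

def choose_defaults(records, threshold=0.2):
    """Pick, per field, the most common value across all records,
    keeping it as a default only if it covers more than `threshold`
    of the records. Values are JSON-encoded for tallying so lists
    and dicts are hashable."""
    tallies = {}
    for rec in records:
        for key, value in rec.items():
            tallies.setdefault(key, Counter())[json.dumps(value, sort_keys=True)] += 1
    defaults = {}
    for key, counter in tallies.items():
        encoded, count = counter.most_common(1)[0]
        if count > threshold * len(records):
            defaults[key] = json.loads(encoded)
    return defaults

def strip_defaults(rec, defaults):
    """Drop any field whose value equals its default, so it need not
    be serialized; re-applying the defaults on load restores it."""
    return {k: v for k, v in rec.items() if k not in defaults or defaults[k] != v}
```

     Fields that match their default simply disappear from the output, which is where most of the size savings over the 242MB raw dump would come from.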
  6. I think there's value here as well as in @RubyRed's API. This library can give you *all* the raw data, while RubyRed's API focuses on giving you the important bits for building tools without all the extras. I'll work on a command-line application for exporting the results as JSON so it can be just as accessible from downstream apps.

     @Sif Alright, quick usage check. I don't know Rust well enough to write correct syntax yet, but here's roughly what it looks like I need to do to load the powers data and write it out as JSON:

     let pigg = Pigg::new("path/to/file.pigg");
     let messages = parse_messages::get_pmessages(pigg.get_data("bin/clientmessages-en.bin"));
     let mut powers = defs::decode::<objects::Power>(pigg.get_data("bin/powers.bin"));
     for power in &mut powers {
         power.fix_strings(&messages);
     }
     std::fs::write("powers.json", serde_json::to_string(&powers).unwrap()).unwrap();

     Does that look about right (other than slaughtering the syntax, I'm sure), or am I missing something?
  7. Reposting @Sif's work here so we don't take over @The Philotic Knight's thread:

     GitHub (source): https://github.com/IndexObscurum/index_datamanip
     Crates.io (Rust package): https://crates.io/crates/index_datamanip
     Docs.rs (docs): https://docs.rs/index_datamanip/

     This contains the code for extracting data from Piggs, enough handling around MessageStores to go from the P-strings to localized text (the rest of the data is parsed, but discarded currently), and for deserializing Parse7-encoded bins into the passed-in data structures. Additionally, it contains all of the structures (in the objects module) used for deserializing Powers/Powerset/PowerCategories/Classes (Classes has something like 2 fields unimplemented).

     I have some additional high-level documentation on the formats I've written up that I intend to merge into the library's docs. Some work would need to be done to make this a CLI / callable from non-Rust languages (I have some ideas for both, but... I also rolled a new Blaster...)
  8. @RubyRed I was about to start working on what @Sif suggested, using his library to create something like what you just did... but you've already done it, plus calculated scales, so I'll work with yours for now. Thank you both for your efforts! Two things I notice missing from the API that are useful for planner-like things are Pets (how much does the burn patch do over time? How much does a Protector Bot bubble for?) and Boosts (Enhancements). I'll add those power categories and see what comes out in the parse locally, but we may need to end up parsing boostsets.bin to get all the info necessary.
  9. Awesome! I don't know Rust yet, but I'll get started on it and see what I can do about turning it into a command line app. Thanks!
  10. Yeah, the .Net framework does have some great tools. I'm just biased towards pure JavaScript 1-tier and 2-tier (JavaScript front-end plus a database API like Firebase) web apps for projects like this, where the majority of the effort is in the GUI and there's no security/fairness incentive to run sensitive code on the server side. It just makes it super easy to host them, and free in several places - GitHub gh-pages for 1-tier, or the free tier of Firebase if you need a database. Even on AWS infrastructure, it's pennies to a few dollars a month depending on traffic, size, and how many features you use. So, rather than taking over your project and forcing it to conform to my biases, I'll work in my own project on GitHub (see previous mention of hosting for why not ourodev), but when I start working with Sif's parser, I'll make it able to export dataset-compatible XML files for you to read in, as well as the JSON format the web app will use. Contributions welcome if you want to join efforts on the React app, or feel free to go your own way with .Net Web Forms.
  11. Agree. I was going to make JSON one output format for the web, and then convert from that to whatever format is needed for desktop. @The Philotic Knight I notice you produce an XML file of the power info. Did I see that you can read one in as well? That's probably the easiest way to get back into the desktop app (unless you want to switch to JSON and use the same format as the web app). Also, I see you had a .Net Web Forms project started. Is that just the default website from creating the project, or is there some code already in there? If it's not already under development, then I'll probably start a new project with a React boilerplate.
  12. Hi. So I had to take a few months off from my Mobile Planner for life stuff (new job, primarily). I was about to get back into it and then saw this effort was ongoing and already had a release, and the Mid's files I was using as source data are no longer being maintained, so it's probably a better use of my time to throw my hat in the ring here than try to re-re-re-reinvent the wheel on my own thing. It looks like this is the way forward. If we could modify Sif's parser to output in the format you derived from the defs, then catching up with a new release would be a point-and-click affair rather than a painstaking process of trying to read the patch notes and in-game power descriptions and trying to back-port a def file from them. @Sif can you share your parser? I sent you a PM (on the wrong account, oops), but you haven't seen it yet. I'd be happy to work on finding the tga files or AT modifiers or whatever is left, or converting it to @The Philotic Knight's format to get us numbers-accurate with the current homecoming build. Alternatively, I can begin working on laying out the web app (in React, preferably). Just let me know where I can be the most help.
  13. Yeah, that's why they stopped calling themselves UI (user interface) designers and started calling themselves UX (user experience) designers. It's as much about having a streamlined flow through the application as it is about looking pretty. We talked this afternoon and the first thing we worked out was user flow diagrams, not artistic concepts, so he's on the same page. "intuitive and easy to use" is often harder than people realize to get right. I can handle making the application do what it needs to do correctly and consistently, but having a UX expert will help achieve that zen-like flow where every button you want to tap just happens to be right under your thumb when you need to tap it.
  14. Quick update: power selection is started. I still need to filter out powers you've already selected from the dropdown list so you can't duplicate, and add the pool powers, but it's coming along. It follows the same basic design for the Heroes screen: click the pencil to modify the power, select from a dropdown, click the info button if you want details. I'm even retaining a per-power notes box if people want to use it. Let me know if anyone wants to see a Tag Line on the powers as well. Another note: a generous forum-goer who happens to be a UX designer by day (I'll let them pipe in and identify themselves if they wish) has offered to help with the look and feel, so once he gets a chance to send me some sketches and I get them integrated, it should start looking nicer. The empty space below the power name will be filled with the slotted enhancements, and perhaps some basic effect details.